ACFS-05913: unable to contact the standby node stbyrac1

Problem:

I was trying to set up ACFS replication, where one of the steps is to validate the keys using acfsutil, which failed with the ACFS-05913 error:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac1
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac2

Cause: 

An attempt to use the ping utility to contact a standby node failed.

Solution:

Enable ICMP traffic between these nodes and retry validation:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
A valid 'ssh' connection was detected for standby node stbyrac1 as user oggrepl.
A valid 'ssh' connection was detected for standby node stbyrac2 as user oggrepl.
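For reference: the validation uses ping, so ICMP must be allowed between the primary and standby nodes. Verifying reachability first, and, if a host firewall on the standby side blocks echo requests, allowing them, looks roughly like this (the iptables line is only a hedged sketch; in cloud deployments the fix is usually a security group or firewall rule instead):

[root@rac1 ~]# ping -c 3 stbyrac1
[root@rac1 ~]# ping -c 3 stbyrac2
[root@stbyrac1 ~]# iptables -I INPUT -p icmp --icmp-type echo-request -j ACCEPT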

srvctl start filesystem hangs

The title of this post is general; there can be many reasons why srvctl start filesystem hangs. The aim of this blog post is to share only one of them.

Problem:

I’ve created an ACFS volume and added it to srvctl:

$ asmcmd volcreate -G OGG -s 10G ACFSGG
# srvctl add filesystem -device /dev/asm/acfsgg-11 -path /GG_HOME -volume acfsgg -diskgroup OGG -user oracle -fstype ACFS

then tried to start the filesystem using:

# srvctl start filesystem -device /dev/asm/acfsgg-11

The command hung.

Troubleshooting:

I checked the logs under the trace folder under the GI base, but could not find any clue. Even worse, stopping the filesystem was also hanging.

But let’s stop here: the file that should have been checked was actually there, but I missed it and checked the wrong files. The file that shows the necessary error is named mount_<process id>.trc and is located under that trace folder. So instead of manually mounting the filesystem to see the error, you can simply open that mount_<process id>.trc and find the reason there.
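For example, a quick way to list the latest mount traces (a sketch assuming the usual Grid Infrastructure diag layout under the GI base; adjust the grid ORACLE_BASE and hostname for your environment):

[root@stbyrac1 ~]# ls -lt /u01/app/grid/diag/crs/$(hostname -s)/crs/trace/mount_*.trc | head -3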

Then I tried mounting the filesystem manually, without srvctl:

[root@stbyrac1 trace]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
mount.acfs: ACFS-03037: not an ACFS file system

and saw the error, which explained what was happening: my volume was not formatted with an ACFS filesystem. Somehow I missed that step on the standby cluster, so it was just a human error, but srvctl should at least have reported that instead of hanging and only placing the info in a trace file.

Solution:

Format ACFS volume:

[root@stbyrac1 trace]# mkfs -t acfs /dev/asm/acfsgg-11
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/acfsgg-11
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Because the start and stop operations hang, you need to mount the filesystem on all database nodes manually:

[root@stbyrac1 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
[root@stbyrac2 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME

Now try to stop and start the filesystem, to make sure srvctl is able to do its job without any manual interaction:

[root@stbyrac1 ~]# srvctl stop filesystem -device /dev/asm/acfsgg-11
[root@stbyrac1 ~]# srvctl start filesystem -device /dev/asm/acfsgg-11

Resize ASM disks in GCP (FG enabled cluster)

Increasing disk size in GCP is an online procedure; you don’t have to stop the VM.

1. If the node is a database node, stop all local database instances running on the node.

2. Go to the Disks page -> click the name of the disk that you want to resize -> click Edit -> enter the new size in Size field -> Save.

Please note that all data disks (not the quorum disk) in the same diskgroup must be increased, otherwise ASM will not let you have different-sized disks.

Choose the other data disks and repeat the same steps. (The same resize can also be done with the gcloud CLI, as sketched below.)
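If you prefer the command line, a hedged gcloud equivalent of the console steps (the disk name and zone are placeholders) is:

$ gcloud compute disks resize rac1-shared-2 --size=15GB --zone=us-central1-a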

3. Run the following on the database nodes as the root user:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done

4. Check new disk sizes:

If it is a FlashGrid (FG) cluster, the Phys_GiB column must show the increased size:

[root@rac1 ~]# flashgrid-dg show -G DATA
...
------------------------------------------------------------
FailGroup ASM_Disk_Name Drive Phys_GiB  ASM_GiB  Status
------------------------------------------------------------
RAC1 RAC1$SHARED_2 /dev/flashgrid/rac1.shared-2 15 10 ONLINE
RAC2 RAC2$SHARED_2 /dev/flashgrid/rac2.shared-2 15 10 ONLINE
RACQ RACQ$SHARED_3 /dev/flashgrid/racq.shared-3 1  1  ONLINE
------------------------------------------------------------

If it is a normal cluster, the OS_MB column must show the increased size:

# su - grid
$ sqlplus / as sysasm
SQL> select TOTAL_MB/1024,OS_MB/1024 from v$asm_disk where GROUP_NUMBER=2;

TOTAL_MB/1024 OS_MB/1024
------------- ----------
	   10	      15
	   10	      15
	    1	       1

5. Connect to the ASM from any database node and run:

# su - grid
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA RESIZE ALL; 

The above command resizes all disks in the specified diskgroup based on the size reported by the OS.

6. Check new sizes:

Fg cluster:

[root@rac1 ~]# flashgrid-dg show -G DATA
...
------------------------------------------------------------
RAC1 RAC1$SHARED_2 /dev/flashgrid/rac1.shared-2 15 15 ONLINE
RAC2 RAC2$SHARED_2 /dev/flashgrid/rac2.shared-2 15 15 ONLINE
RACQ RACQ$SHARED_3 /dev/flashgrid/racq.shared-3 1  1  ONLINE
------------------------------------------------------------

Normal cluster:

SQL> select TOTAL_MB/1024,OS_MB/1024 from v$asm_disk where GROUP_NUMBER=2 ;

TOTAL_MB/1024 OS_MB/1024
------------- ----------
	   15	      15
	   15	      15
	    1	       1

Phys_GiB and ASM_GiB (or TOTAL_MB and OS_MB) should now show the same increased size, which means the disks are resized and you can use the extended space.

Resize ASM disks in Azure (FG enabled cluster)

1. If the node is a database node, stop all local database instances running on the node.

2. Stop the database VM from the Azure console. In Azure you are not able to resize disks while the VM is running, so we need to stop it first.

3. Increase all database disks belonging to the same diskgroup to the desired size. Make sure disks in the same diskgroup have the same sizes.

To resize a disk, click VM -> Disks -> choose the data disk (in my case the 10GB disk is a DATA disk).

After clicking the disk, you will be redirected to its page: choose Configuration -> enter the desired disk size (in my case I changed it from 10 to 15) -> Save. (The same resize can also be done with the Azure CLI, as sketched below.)
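A hedged Azure CLI equivalent (the resource group and disk name are placeholders; the VM must still be stopped/deallocated first):

$ az disk update --resource-group rac-rg --name rac1-lun2 --size-gb 15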

4. Start the database node.

5. Repeat steps 1-4 for the next database node (no need to increase the quorum disks; it is only necessary for the database nodes).

6. Check new disk sizes:

If it is a FlashGrid (FG) cluster, the Phys_GiB column must show the increased size:

[root@rac1 ~]# flashgrid-dg show -G DATA
...
------------------------------------------------------------
FailGroup ASM_Disk_Name Drive Phys_GiB  ASM_GiB  Status
------------------------------------------------------------
RAC1    RAC1$LUN2     /dev/flashgrid/rac1.lun2 15  10 ONLINE
RAC2    RAC2$LUN2     /dev/flashgrid/rac2.lun2 15  10 ONLINE
RACQ    RACQ$LUN3     /dev/flashgrid/racq.lun3  1  1  ONLINE
------------------------------------------------------------

If it is a normal cluster, the OS_MB column must show the increased size:

# su - grid
$ sqlplus / as sysasm
SQL> select TOTAL_MB/1024,OS_MB/1024 from v$asm_disk where GROUP_NUMBER=2 ;

TOTAL_MB/1024 OS_MB/1024
------------- ----------
	   10	      15
	   10	      15
	    1	       1

7. Connect to the ASM from any database node and run:

# su - grid
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA RESIZE ALL; 

The above command resizes all disks in the specified diskgroup based on the size reported by the OS.

8. Check new sizes:

Fg cluster:

[root@rac1 ~]# flashgrid-dg show -G DATA
...
------------------------------------------------------------
FailGroup ASM_Disk_Name Drive Phys_GiB  ASM_GiB  Status
------------------------------------------------------------
RAC1    RAC1$LUN2     /dev/flashgrid/rac1.lun2 15  15 ONLINE
RAC2    RAC2$LUN2     /dev/flashgrid/rac2.lun2 15  15 ONLINE
RACQ    RACQ$LUN3     /dev/flashgrid/racq.lun3  1  1  ONLINE
------------------------------------------------------------

Normal cluster:

SQL> select TOTAL_MB/1024,OS_MB/1024 from v$asm_disk where GROUP_NUMBER=2 ;

TOTAL_MB/1024 OS_MB/1024
------------- ----------
	   15	      15
	   15	      15
	    1	       1

Phys_GiB and ASM_GiB (or TOTAL_MB and OS_MB) should now show the same increased size, which means the disks are resized and you can use the extended space.

‘udev’ rules continuously being reloaded resulted in ASM nvme disks going offline

Environment:

Linux Server release 7.2
kernel-3.10.0-514.26.2.el7.x86_64

Problem:

When Oracle processes open the device for writing and then close it, a change event is synthesized, and udev rules with ACTION=="add|change" are re-applied. This behavior causes the ASM NVMe disks to go offline:

Thu Jul 09 16:33:16 2020
WARNING: Disk 18 (rac1$disk1) in group 2 mode 0x7f is now being offlined

Fri Jul 10 10:04:34 2020
WARNING: Disk 19 (rac1$disk5) in group 2 mode 0x7f is now being offlined

Fri Jul 10 13:45:45 2020
WARNING: Disk 15 (rac1$disk8) in group 2 mode 0x7f is now being offlined
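Before applying the fix, one hedged way to confirm that such change events are indeed being generated is to watch udev block events for a while (stop with Ctrl-C) and look for ACTION=change on the nvme devices:

# udevadm monitor --udev --subsystem-match=block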

Solution:

To suppress the false-positive change events, disable the inotify watch for the devices used by Oracle ASM using the following steps:

1. Create the /etc/udev/rules.d/96-nvme-nowatch.rules file with just one line in it:
ACTION=="add|change", KERNEL=="nvme*", OPTIONS:="nowatch"

2. After creating the file, run the following to activate the rule:

# udevadm control --reload-rules
# udevadm trigger --type=devices --action=change

The above commands reload the complete udev configuration and trigger all udev rules. On a busy production system this could disrupt ongoing operations and applications running on the server, so use them during a scheduled maintenance window only.

Source: https://access.redhat.com/solutions/1465913 + our experience with customers.

Resize ASM disks in AWS (FG enabled cluster)

  1. Connect to AWS console https://console.aws.amazon.com
  2. On the left side -> under the section ELASTIC BLOCK STORE -> choose Volumes
  3. Choose the necessary disk -> click the Actions button -> choose Modify Volume -> change Size
    Please note that all data disks (not the quorum disk) in the same diskgroup must be increased, otherwise ASM will not let you have different-sized disks.

Choose the other data disks and repeat the same steps. (The same resize can also be done with the AWS CLI, as sketched below.)
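A hedged AWS CLI equivalent of the console steps (the volume ID is a placeholder):

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 15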

4. Run the following on the database nodes as the root user:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done

5. Check that disks have correct sizes:

# flashgrid-node

6. Connect to the ASM instance from any database node and run:

[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 23 10:17:50 2019
Version 19.4.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> alter diskgroup GRID resize all; 
Diskgroup altered.

UDEV rules for configuring ASM disks

Problem:

During my previous installations I used the following udev rule on multipath devices:

KERNEL=="dm-[0-9]*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="360050768028200a9a40000000000001c", NAME="oracleasm/asm-disk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"

So to identify the exact disk I used the PROGRAM option. The above rule looks through /dev/dm-* devices, and if any of them satisfies the condition, for example:

# scsi_id -gud /dev/dm-3
360050768028200a9a40000000000001c 

then the device name will be changed to /dev/oracleasm/asm-disk1, the owner:group to oracle:asmadmin and the permissions to 0660.

But on my new servers the same udev rule was not working anymore. (Of course, it needs more investigation, but time is valuable and never enough, and if another solution works and is acceptable, let’s just use it.)

Solution:

I used the udevadm command to identify other properties of these devices and wrote a new udev rule (to see all properties, just remove the grep):

# udevadm info --query=property --name /dev/mapper/asm1 | grep DM_UUID
DM_UUID=mpath-360050768028200a9a40000000000001c

New udev rule looks like this:

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
ENV{DM_UUID}=="mpath-360050768028200a9a40000000000001c",  SUBSYSTEM=="block", NAME="oracleasm/asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
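If there are several multipath devices to map, a small loop like this can generate the rule lines (just a sketch; the mapper aliases asm1, asm2, asm3 are placeholders for your own device names):

for d in asm1 asm2 asm3; do
  uuid=$(udevadm info --query=property --name /dev/mapper/$d | awk -F= '/^DM_UUID=/{print $2}')
  echo "ENV{DM_UUID}==\"$uuid\", SUBSYSTEM==\"block\", NAME=\"oracleasm/$d\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

Append the output to /etc/udev/rules.d/99-oracle-asmdevices.rules and trigger udev as below.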

Trigger udev rules:

# udevadm trigger

Verify that name, owner, group and permissions are changed:

# ll /dev/oracleasm/
total 0
brw-rw---- 1 grid asmadmin 253, 3 Jul 17 17:33 asm-disk1

ASM Diskstring default values

In case you have trouble getting newly added disks recognized by your ASM instance, check the ASM_DISKSTRING init parameter. If ASM_DISKSTRING is NOT SET, then the following default is used.

Default ASM_DISKSTRING per Operating System

Operating System        Default Search String
==============================================
Solaris (32/64 bit)     /dev/rdsk/*
Windows NT/XP           \\.\orcldisk*
Linux (32/64 bit)       /dev/raw/*
Linux (ASMLib)          ORCL:*
Linux (ASMLib)          /dev/oracleasm/disks/*
HP-UX                   /dev/rdsk/*
HP Tru64 UNIX           /dev/rdisk/*
AIX                     /dev/*

If ASM_DISKSTRING is set, verify that the setting includes the disks that need to be seen by ASM.
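To check the current value, and to point ASM at your disks if needed, you can run something like this from the ASM instance (the path is only an example for udev-managed devices; use the pattern that matches your disks):

SQL> show parameter asm_diskstring
SQL> alter system set asm_diskstring='/dev/oracleasm/*';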

 

Source: https://abchatur.wordpress.com/2011/09/01/asm-diskstring-default-values/

 

Add/Drop ASM disks to DISKGROUP on RAC(or Standalone)

Note: The steps are described for RAC, but you can easily infer the steps for a standalone database.

1. First of all, find the disk or partition name that should be added to ASM.

fdisk -l

My disk partition name is /dev/sdi1.

2. Assign the disk to ORACLEASM.

-- On node1

/etc/init.d/oracleasm createdisk DISK7 /dev/sdi1

3. Scan disks in ALL NODES and list them to check if is presented.

-- On node1

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

-- On node2

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

4. Change the environment to the Grid Infrastructure by setting ORACLE_SID to the ASM instance SID and so on:

$ . oraenv
ORACLE_SID = [media1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

# Connect as SYSASM

sqlplus / as sysasm

Note: If you don’t remember the password for the sysasm user see How to reset SYSASM password.

# Find the diskgroup name

SQL> select name from v$asm_diskgroup;

NAME
------------------------------
DATA01

# Increase the power limit, if you want, to complete the rebalance operation in a shorter time.

SQL> alter system set asm_power_limit=10;

# Point the asm_diskstring parameter to the disks' location

SQL> alter system set asm_diskstring='ORCL:DISK*';

SQL> alter diskgroup DATA01 add disk 'ORCL:DISK7';

It will do the rebalance automatically.

# To drop the disk, do the following:

SQL> alter diskgroup DATA01 drop disk DISK7;

It will rebalance first and then drop the disk automatically.

You can see the current operation in the v$asm_operation view.
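For example, a query like this shows the running rebalance and its estimated remaining time:

SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;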

Note: While v$asm_operation still shows a record for the drop, you are able to undrop the disks in the following way:

SQL> alter diskgroup DATA01 undrop disks;

If the operation has already completed, you are not able to undrop the disk, but you can re-add it if you want.

That is all.

Install Oracle 11.2.0.3 with ASM on Centos 6.3

For RHEL 6, Oracle will provide ASMLib software and updates only when configured with a kernel distributed by Oracle. Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6. ASMLib updates will be delivered via Unbreakable Linux Network (ULN), which is available to customers with Oracle Linux support. ULN works with both Oracle Linux or Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat kernel with a kernel provided by Oracle.

Because of the above announcement we use UDEV rules to prepare disks for ASM installation.

So let’s start.

1. Install required RPMs.

RPM names:

compat-libcap1-1.10-1.i686.rpm
compat-libcap1-1.10-1.x86_64.rpm
compat-libstdc++-33-3.2.3-69.el6.x86_64.rpm
elfutils-devel-0.152-1.el6.x86_64.rpm
elfutils-libelf-devel-0.152-1.el6.x86_64.rpm
gcc-c++-4.4.6-4.el6.x86_64.rpm
glibc-2.12-1.80.el6.i686.rpm
glibc-devel-2.12-1.80.el6.i686.rpm
libaio-0.3.107-10.el6.i686.rpm
libaio-devel-0.3.107-10.el6.x86_64.rpm
libattr-2.4.44-7.el6.i686.rpm
libcap-2.16-5.5.el6.i686.rpm
libgcc-4.4.6-4.el6.i686.rpm
libstdc++-devel-4.4.6-4.el6.x86_64.rpm
libtool-ltdl-2.2.6-15.5.el6.i686.rpm
ncurses-devel-5.7-3.20090208.el6.i686.rpm
ncurses-libs-5.7-3.20090208.el6.i686.rpm
nss-softokn-freebl-3.12.9-11.el6.i686.rpm
pdksh-5.2.14-30.x86_64.rpm
readline-6.0-4.el6.i686.rpm

If you have the CentOS installation disk, these RPMs should be located there, or you can download them from http://rpm.pbone.net/.

Note: if, while installing pdksh-5.2.14-30.x86_64.rpm, it says that the package conflicts with ksh, then you should erase the ksh package and install pdksh, like this:

# rpm -qa | grep ksh
# rpm -e ksh-…
# rpm -ivh pdksh-5.2.14-30.x86_64.rpm

2. Configure Kernel:

# vi /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 982431744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

To make changes take effect:

# sysctl -p

Edit /etc/pam.d/login :

# vi /etc/pam.d/login

session required pam_limits.so

Edit /etc/security/limits.conf:

# vi /etc/security/limits.conf

oracle soft  nproc   2047
oracle hard  nproc   16384
oracle soft  nofile  1024
oracle hard  nofile  65536

grid   soft  nproc   2047
grid   hard  nproc   16384
grid   soft  nofile  1024
grid   hard  nofile  65536


Run the following to add lines in /etc/profile:

[root@orcl ~]# cat >> /etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF

Run the following to add lines in /etc/csh.login

[root@orcl ~]# cat >> /etc/csh.login <<EOF
if ( \$USER == "oracle" || \$USER == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF

Disable SELinux:

# /usr/sbin/getenforce
Enforcing

# /usr/sbin/setenforce 0

To make the change permanent, edit the /etc/sysconfig/selinux file as follows:

cat /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

To check the status again:

# /usr/sbin/getenforce
Disabled

3.  Creating OS groups and users.

#Creating groups for Grid Infrastructure

groupadd asmadmin
groupadd asmdba
groupadd asmoper

#Creating groups for Oracle Software

groupadd oinstall
groupadd dba
groupadd oper

#Creating user for Grid Infrastructure

useradd -g oinstall -G dba,asmadmin,asmdba,asmoper -d /home/grid grid

#Creating user for Oracle Software

useradd -g oinstall -G dba,oper,asmdba -d /home/oracle oracle

#Setting password for users

passwd grid
passwd oracle

4. Creating necessary directories

mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01

5. Creating .bash_profile-s

#For Oracle user

su - oracle
vi .bash_profile

if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

ORACLE_SID=orcl; export ORACLE_SID

ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME

JAVA_HOME=/usr/local/java; export JAVA_HOME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_HOME

ORACLE_TERM=xterm; export ORACLE_TERM

NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
export NLS_DATE_FORMAT

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

export TEMP=/tmp
export TMPDIR=/tmp

umask 022

#For Grid user

su - grid
vi .bash_profile

if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

ORACLE_SID=+ASM; export ORACLE_SID

JAVA_HOME=/usr/local/java; export JAVA_HOME

ORACLE_BASE=/u01/app/grid; export ORACLE_BASE

ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

ORACLE_TERM=xterm; export ORACLE_TERM

NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

export TEMP=/tmp
export TMPDIR=/tmp

umask 022

 

6. Add disks for ASM

If your disk is not partitioned yet, partition it with the fdisk utility.

At this point you will need the SCSI identifier.

So if your disk is physical, just run the following command to identify it:

# scsi_id -g -u -d /dev/sdb
36000c292dfddac7b8934d3293313098e

Or if you have a virtual disk, you will need to set the disk.EnableUUID parameter to TRUE to see this identifier:

Shut down the VM, go to the directory where the VM files are stored and edit the VMX file. Add the following line:

disk.EnableUUID = "TRUE"

Restart your VM and run the above command (scsi_id -g -u -d /dev/sdb)

We will use this identifier in a UDEV rule, in the /etc/udev/rules.d/50-udev.rules file, to set the permissions and an alias for the new device.

# vi /etc/udev/rules.d/50-udev.rules

KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c292dfddac7b8934d3293313098e", NAME="oracleasm/asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

Restart udev as follows:

# /sbin/start_udev
Starting udev:                                             [  OK  ]

Check that alias exists:

# ls -la /dev/oracleasm/*
brw-rw---- 1 oracle dba 8, 17 Feb 18 12:49 /dev/oracleasm/asm-disk1

7. Install grid

Go to the grid installation folder and run:

./runInstaller

Step 1:

Skip Software Updates.

Step 2:

Configure Oracle Grid Infrastructure for a Standalone Server.

Step 3:

Click Next.

Step 4:

Select the Change Discovery Path button and enter /dev/oracleasm.

The disk /dev/oracleasm/asm-disk1 should appear; select (check) it.

Type disk group name as DATA01.

Step 5:

Set the passwords for the SYS and ASMSNMP accounts.

Step 6:

ASM Database Administrator(OSDBA) Group : asmdba
ASM Instance Administration Operator(OSOPER) Group: asmoper
ASM Instance Administrator(OSASM) Group: asmadmin

Step 7:

Click Next.

Step 8:

Click Next.

Step 9:

Click Install.

When the installer pops up a window asking you to run

      • /u01/app/oraInventory/orainstRoot.sh
      • /u01/app/11.2.0/grid/root.sh

run these scripts one by one and click OK in the pop-up window.

Note: if the root.sh script shows an error like this:

…error while loading shared libraries: libcap.so.1: …

then the libcap-1 and libcap-2 RPMs are missing on your system; install them first.
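A hedged way to check for and pull in the missing libraries (package names can vary slightly between CentOS releases; on EL6 the compat-libcap1 package is what provides libcap.so.1):

# rpm -qa | grep -i libcap
# yum install -y compat-libcap1 libcap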

8. Install database

Go to the database installation folder and run:

./runInstaller

Step 1:

Uncheck the security updates email option. (I don’t want any more spam, thanks.)

Step 2:

Skip the updates.

Step 3:

Create and configure a database.

Step 4:

Server Class.

Step 5:

Single instance database installation.

Step 6:

Advanced install.

Step 7:

Choose the languages you want.

Step 8:

Enterprise Edition.

Step 9:

Choose the defaults.
Note: The Grid and Database homes must be in different folders.

Step 10:

General Purpose / Transaction Processing.

Step 11:

Write a database name.

Step 12:

Use Oracle Enterprise Manager Database Control for database management.

Step 13:

Use Automatic Storage Management.

Step 15:

Do not enable automated backups

Step 16:

Select the DATA01 diskgroup.

Step 17:

Set the passwords for the database.

Step 18:

Accept the defaults.

Database Administrator(OSDBA) Group: dba
Database Operator(OSOPER) Group: oper

Step 19:

Click Install to start the installer.

When the installer asks you to run the root.sh script, run it.

That is all.