Resize ASM disks in AWS (FG-enabled cluster)

  1. Connect to the AWS console: https://console.aws.amazon.com
  2. On the left side, under the ELASTIC BLOCK STORE section, choose Volumes.
  3. Choose the necessary disk -> click the Actions button -> choose Modify Volume -> change Size.
    Please note that all data disks (but not the quorum disk) belonging to the same diskgroup must be increased together, because ASM does not allow disks of different sizes within a diskgroup.

Repeat the same steps for the other data disks.
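If you prefer the AWS CLI, the same resize can be done with the modify-volume command and tracked with describe-volumes-modifications (the volume ID and size below are placeholder values; the size is in GiB):

$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0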

4. Run the following on all database nodes as the root user:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done

5. Check that the disks have the correct sizes:

# flashgrid-node
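The flashgrid-node output includes the physical drive sizes. As an additional sanity check, you can compare what the OS now reports against what ASM still has recorded; in the standard v$asm_disk view, OS_MB should already show the new size while TOTAL_MB keeps the old one until the diskgroup is resized:

SQL> select name, os_mb, total_mb from v$asm_disk where group_number <> 0 order by name;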

6. Connect to the ASM instance from any database node and run:

[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 23 10:17:50 2019
Version 19.4.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> alter diskgroup GRID resize all; 
Diskgroup altered.
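To confirm that the diskgroup capacity has grown, query v$asm_diskgroup:

SQL> select name, total_mb, free_mb from v$asm_diskgroup where name = 'GRID';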

FlashGrid SkyCluster Now Supports Oracle Database 19c

FlashGrid SkyCluster Version 19.06 now has full support for GI/DB 19c, which means that using the FG launcher tool (https://www.flashgrid.io/skycluster-for-aws/#launch , https://www.flashgrid.io/skycluster-for-azure/#launch , https://www.flashgrid.io/skycluster-for-gcp/#launch ) you can set up multi-node Real Application Clusters in the cloud automatically in about 2 hours.

"Oracle 19c is a long-term support release from Oracle with extended support available through 2026"

https://www.kb.flashgrid.io/release-notes/cloud-provisioning

Oracle GoldenGate Setup on FlashGrid-enabled clusters

Network Access:

  1. Set up VPC peering between the sites.

2. Make sure that port UDP/4801 is open in the security group settings on each side, as in the example below.
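For example, with the AWS CLI the rule can be added as follows (the security group ID and the peer CIDR below are placeholders; substitute the values of your own VPCs):

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4801 --cidr 10.40.0.0/24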

Preparing the System:

  1. The /etc/hosts file on each side must contain entries for the other side:

10.30.0.5 rac1-hq-ext.example.com rac1-hq-ext
10.30.0.6 rac2-hq-ext.example.com rac2-hq-ext
10.30.0.4 racq-hq-ext.example.com racq-hq-ext
10.40.0.6 rac1-dr-ext.example.com rac1-dr-ext
10.40.0.5 rac2-dr-ext.example.com rac2-dr-ext
10.40.0.4 racq-dr-ext.example.com racq-dr-ext

2. On the first node of the source cluster, add entries for the target cluster to the nodes section of /etc/flashgrid-clan.cfg:

nodes = {'rac1-hq': {'address': 'rac1-hq-ext', 'id': 101, 'role': 'database'},
'rac2-hq': {'address': 'rac2-hq-ext', 'id': 102, 'role': 'database'},
'racq-hq': {'address': 'racq-hq-ext', 'id': 103, 'role': 'quorum'},
'rac1-dr': {'address': 'rac1-dr-ext', 'id': 201, 'role': 'database'},
'rac2-dr': {'address': 'rac2-dr-ext', 'id': 202, 'role': 'database'},
'racq-dr': {'address': 'racq-dr-ext', 'id': 203, 'role': 'quorum'}}

3. Modify the /home/fg/.ssh/authorized_keys file on each cluster node and add the public key of the other side, for example as shown below.
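One way to do this, assuming the fg user already has an RSA key pair in place, is to print the public key on each node and append it to authorized_keys on every node of the other cluster (the key body below is truncated for illustration):

[fg@rac1-hq ~]$ cat ~/.ssh/id_rsa.pub
[fg@rac1-dr ~]$ echo "ssh-rsa AAAA... fg@rac1-hq" >> /home/fg/.ssh/authorized_keys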

4. Run the following command from the first node of the source cluster to deploy the clan config:

 # flashgrid-clan-cfg deploy-config -f


Create a shared folder for GoldenGate on ACFS

Create a separate diskgroup called GGDG for the GoldenGate software and create an ACFS shared filesystem on it.

  1. Log on to the first node of the RAC cluster as the grid user.
  2. Create a volume:

$ sudo su - grid

$ asmcmd

ASMCMD> volcreate -G GGDG -s 10G ACFSGG

3. Determine the device name of the volume:

ASMCMD> volinfo --all

Diskgroup Name: GGDG

Volume Name: ACFSGG
Volume Device: /dev/asm/acfsgg-458
State: ENABLED
Size (MB): 10240
Resize Unit (MB): 512
Redundancy: MIRROR
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:

4. Create the filesystem on the volume:

[grid@rac1-hq ~]$ mkfs -t acfs /dev/asm/acfsgg-458

5. As root, create an empty directory to serve as the ACFS mount point and set its ownership and permissions:

# mkdir /GG_HOME
# chmod 775 /GG_HOME
# chown oracle:oinstall /GG_HOME

Set up the file system to be automounted by Clusterware:

  1. Add the filesystem resource using srvctl:

# srvctl add filesystem -device /dev/asm/acfsgg-458 -path /GG_HOME -volume acfsgg -diskgroup GGDG -user oracle -fstype ACFS -description "ACFS for Golden Gate"

2. Start the filesystem service:

# srvctl start filesystem -device /dev/asm/acfsgg-458
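You can verify that the resource is running and the volume is mounted:

# srvctl status filesystem -device /dev/asm/acfsgg-458
# df -h /GG_HOME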

3. Repeat the same steps on the target cluster to create the ACFS shared filesystem for GoldenGate.

Installing the GoldenGate software

  1. Download and install the GoldenGate software on the shared ACFS mount point. The software binaries (123012_fbo_ggs_Linux_x64_shiphome.zip) can be downloaded from http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html

2. Place the downloaded software in the /tmp/Install directory. Give the Oracle Database owner the necessary permissions and install the software as the oracle user:

# chown oracle:oinstall /tmp/Install/123012_fbo_ggs_Linux_x64_shiphome.zip

# su - oracle
$ cd /tmp/Install/

$ unzip 123012_fbo_ggs_Linux_x64_shiphome.zip
$ cd fbo_ggs_Linux_x64_shiphome/Disk1

$ ./runInstaller -silent -nowait -showProgress INSTALL_OPTION=ORA12c SOFTWARE_LOCATION=/GG_HOME/home_1 START_MANAGER=false MANAGER_PORT= DATABASE_LOCATION= INVENTORY_LOCATION=/u01/app/oraInventory UNIX_GROUP_NAME=oinstall

3. Repeat the same steps on the target cluster.

Install Oracle Grid Infrastructure Standalone Agents

  1. Download the software from http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html
  2. Place the downloaded software in /tmp/Install. Give the grid user the necessary permissions. From the first node of the cluster, run the following as the grid user:

# chown grid:dba /tmp/Install/xagpack81b.zip
# su - grid
$ cd /tmp/Install
$ unzip xagpack81b.zip
$ ./xag/xagsetup.sh --install --directory /u01/app/grid/xag --all_nodes
Installing Oracle Grid Infrastructure Agents on: rac1-hq
Installing Oracle Grid Infrastructure Agents on: rac2-hq
Done.

3. Repeat the same steps on the target cluster.


Create application VIP

  1. From the first node of the target cluster, as root, create the application VIP:

# $GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.1.250 -vipname=gg_vip_dr -user=root

2. Start the VIP:

# crsctl start resource gg_vip_dr

3. Allow the oracle user to start the VIP:

# crsctl setperm resource gg_vip_dr -u user:oracle:r-x

The application VIP is necessary on the target cluster only. If you plan to have a dual setup where each cluster can act as a target, then create a VIP on each side.
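You can check the state of the VIP resource at any time:

# crsctl stat res gg_vip_dr -t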

XAG Registration

  1. From the first node, as the oracle user, register the GoldenGate instance with the agent.

On source:

$ /u01/app/grid/xag/bin/agctl add goldengate gg_replicate --gg_home /GG_HOME/home_1 --instance_type source --nodes rac1-hq,rac2-hq --filesystems ora.ggdg.acfsgg.acfs --databases ora.orcl.db --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1

On target:

$ /u01/app/grid/xag/bin/agctl add goldengate gg_replicate --gg_home /GG_HOME/home_1 --instance_type target --nodes rac1-dr,rac2-dr --vip_name gg_vip_dr --filesystems ora.ggdg.acfsgg.acfs --databases ora.orcl.db --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1

Archivelog mode setup for the source database

The steps to convert the database to archivelog mode are:

  1. Stop the database if it is running:

$ srvctl stop database -db orcl

2. Do the following from the first node only:

SQL> startup mount;

SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=10G;

SQL> alter system set db_recovery_file_dest='+FRA';

SQL> alter database archivelog;

SQL> alter database add supplemental log data;

SQL> shutdown immediate;

3. Start the database:

$ srvctl start database -db orcl
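Verify that archivelog mode and minimal supplemental logging are now enabled:

SQL> select log_mode, supplemental_log_data_min from v$database;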

Schema setup on source and target

Set the following parameters on each side:

  1. Set the enable_goldengate_replication parameter to true:

SQL> alter system set enable_goldengate_replication=true;

2. For integrated capture, set the streams_pool_size parameter:

SQL> alter system set streams_pool_size=4G scope=spfile;
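Because the parameter is set with scope=spfile, it takes effect only after the instance is restarted. After the restart, you can confirm the running value:

SQL> show parameter streams_pool_size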

3. Create the GoldenGate schema on the source and target databases:

SQL> create tablespace ggcw datafile '+DATA' size 100m autoextend on next 5m maxsize unlimited;

SQL> create user ggcw identified by ggcw default tablespace ggcw temporary tablespace temp quota unlimited on ggcw;

SQL> grant connect, resource to ggcw;
SQL> grant select any dictionary, select any table to ggcw;
SQL> grant create table to ggcw;
SQL> grant flashback any table to ggcw;
SQL> grant execute on dbms_flashback to ggcw;
SQL> grant execute on utl_file to ggcw;
SQL> grant create any table to ggcw;
SQL> grant insert any table to ggcw;
SQL> grant update any table to ggcw;
SQL> grant delete any table to ggcw;
SQL> grant drop any table to ggcw;

SQL> @/GG_HOME/home_1/marker_setup
Enter Oracle GoldenGate schema name:ggcw

SQL> @/GG_HOME/home_1/ddl_setup
Enter Oracle GoldenGate schema name:ggcw
SQL> @/GG_HOME/home_1/role_setup
Enter Oracle GoldenGate schema name:ggcw
SQL> grant ggs_ggsuser_role to ggcw;
SQL> @/GG_HOME/home_1/ddl_enable
SQL> @/GG_HOME/home_1/ddl_pin ggcw
SQL> @/GG_HOME/home_1/sequence.sql ggcw

 

Set up a basic configuration of OGG

  1. To be able to run the GGSCI command, create the following symbolic links in /GG_HOME/home_1 as the oracle user on both sides:

$ cd /GG_HOME/home_1
$ ln -s /u01/app/oracle/product/12.2.0/dbhome_1/lib/libnnz12.so libnnz12.so
$ ln -s /u01/app/oracle/product/12.2.0/dbhome_1/lib/libclntsh.so.12.1 libclntsh.so.12.1
$ ln -s /u01/app/oracle/product/12.2.0/dbhome_1/lib/libons.so libons.so
$ ln -s /u01/app/oracle/product/12.2.0/dbhome_1/lib/libclntshcore.so.12.1 libclntshcore.so.12.1

2. Create the subdirectories from the GGSCI command-line interface on the source and target:

[oracle@rac1-hq ~]$ . oraenv
ORACLE_SID = [orcl1] ?
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/12.2.0/dbhome_1
[oracle@rac1-hq ~]$ /GG_HOME/home_1/ggsci

GGSCI> create subdirs

3. Enable supplemental logging on the source database tables:

GGSCI> DBLOGIN USERID ggcw PASSWORD ggcw

GGSCI> ADD TRANDATA HR.*
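You can confirm that supplemental logging has been enabled for the tables:

GGSCI> INFO TRANDATA HR.*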

4. Edit the global parameters on the source and target and create the checkpoint table:

GGSCI> edit params ./GLOBALS

GGSCHEMA ggcw
CHECKPOINTTABLE ggcw.CKPTAB

GGSCI> ADD CHECKPOINTTABLE

5. Create the manager parameter file on the source and target:

GGSCI> edit params mgr

PORT 7809
AUTORESTART ER *, RETRIES 5, WAITMINUTES 1, RESETMINUTES 60
AUTOSTART ER *
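If GoldenGate is not yet under XAG control (step 7 below starts everything via agctl), the manager can be started and checked directly from GGSCI:

GGSCI> START MGR
GGSCI> INFO MGR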

6. Create the Integrated and Classic extracts.

6.1 Integrated capture:

Make sure you have set the correct value for ORACLE_SID.

GGSCI> DBLOGIN USERID ggcw PASSWORD ggcw

GGSCI> register extract ext1 database

GGSCI> add extract ext1, integrated tranlog, begin now

GGSCI> ADD EXTTRAIL ./dirdat/rt, EXTRACT ext1, MEGABYTES 100

GGSCI> EDIT PARAMS ext1

EXTRACT ext1
USERID ggcw, PASSWORD ggcw
RMTHOST 192.168.1.250, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.emp;

Note: The RMTHOST parameter must reference the application VIP address of the target.

6.2 Classic capture:

6.2.1 Apply the patch for bug 27379190 before using classic capture (see Doc ID 2360874.1).

6.2.2 If the archived logs are located on ASM, then you need to add the following entries to $ORACLE_HOME/network/admin/tnsnames.ora:

Cluster Node 1:

ASM =
(ADDRESS=(PROTOCOL=BEQ)
(PROGRAM=/u01/app/12.2.0/grid/bin/oracle)
(ARGV0=oracle+ASM1)
(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
(ENVS='ORACLE_HOME=/u01/app/12.2.0/grid,ORACLE_SID=+ASM1'))

Cluster Node 2:

ASM =
(ADDRESS=(PROTOCOL=BEQ)
(PROGRAM=/u01/app/12.2.0/grid/bin/oracle)
(ARGV0=oracle+ASM2)
(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
(ENVS='ORACLE_HOME=/u01/app/12.2.0/grid,ORACLE_SID=+ASM2'))
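A quick way to sanity-check the alias is to connect through it with sqlplus (it will prompt for the SYS password of the ASM instance):

$ sqlplus "sys@ASM as sysasm"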

6.2.3 Set the correct value for ORACLE_SID and log in to the database using the GGSCI command-line interface:

GGSCI> DBLOGIN USERID ggcw PASSWORD ggcw

6.2.4 Add the extract, specifying the number of RAC redo threads:

GGSCI> ADD EXTRACT ext2, TRANLOG, THREADS 2, BEGIN NOW

GGSCI> ADD EXTTRAIL ./dirdat/et, EXTRACT ext2, MEGABYTES 100

GGSCI> EDIT PARAMS ext2

EXTRACT ext2
USERID ggcw, PASSWORD ggcw
TRANLOGOPTIONS ASMUSER SYS@ASM, ASMPASSWORD MyPassword2017
RMTHOST 192.168.1.250, MGRPORT 7809
RMTTRAIL ./dirdat/et
TABLE hr.sal;

6.2.5 Create the replicats on the target:

GGSCI> DBLOGIN USERID ggcw, PASSWORD ggcw

GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt

GGSCI> ADD REPLICAT rep2, EXTTRAIL ./dirdat/et

At this point, if the ADD REPLICAT command fails with the error "ERROR: No checkpoint table specified for ADD REPLICAT", simply exit that GGSCI session and start another one before issuing ADD REPLICAT again. The ADD REPLICAT command fails if issued from the same session in which the GLOBALS file was created.

GGSCI> EDIT PARAMS rep1

REPLICAT rep1
ASSUMETARGETDEFS
USERID ggcw, PASSWORD ggcw
MAP hr.emp, TARGET hr.emp;

GGSCI> EDIT PARAMS rep2

REPLICAT rep2
ASSUMETARGETDEFS
USERID ggcw, PASSWORD ggcw
MAP hr.sal, TARGET hr.sal;

7. Start GoldenGate on the source and target clusters as the oracle user:

$ /u01/app/grid/xag/bin/agctl start goldengate gg_replicate
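You can check the state of the GoldenGate resource and its processes afterwards:

$ /u01/app/grid/xag/bin/agctl status goldengate gg_replicate
$ /GG_HOME/home_1/ggsci

GGSCI> INFO ALL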