ORA-15120: ASM file name ‘ORA-27090: Unable to reserve kernel resources f’ does not begin with the ASM prefix character

Problem:

The customer had created 36 databases on the same server, and while creating the 37th with dbca received the error shown in the title.

Reason:

The fs.aio-max-nr value in /etc/sysctl.conf was set too low. The value used in their case, 3145728, suits many environments, but as the number of databases on the server grows, this parameter must be increased accordingly.

Solution:

The formula for calculating the value of this parameter is the following:

aio-max-nr = processes per database * number of databases * 4096

In their case, the number of processes per database was 1000 and the number of databases planned for this server was 80, so based on the formula above the value should be:

aio-max-nr = 1000 * 80 * 4096 = 327680000
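
It is also worth checking how close the system already is to the current limit before changing it (a quick check, assuming a Linux host; both files are standard procfs entries):

# cat /proc/sys/fs/aio-nr
# cat /proc/sys/fs/aio-max-nr

If fs.aio-nr is close to fs.aio-max-nr, new instances will fail to initialize asynchronous I/O.
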
  1. Add/update value in /etc/sysctl.conf:
# vim /etc/sysctl.conf

fs.aio-max-nr = 327680000


2. Run /sbin/sysctl -p to immediately enforce the changes:

# sysctl -p 
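
You can confirm that the new limit is active without a reboot:

# sysctl fs.aio-max-nr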

Delete the files left over from the failed creation and recreate the database; this time it should succeed.


Spread an RMAN backup job across multiple RAC instances in parallel to increase throughput

There are two options for allocating RMAN channels across different RAC instances to increase throughput.

I will start with the option that guarantees each RAC instance gets exactly one channel. The other option load-balances, but randomly, so with a small number of channels you may find that all of them end up on one instance. Decide for yourself which option suits you better.

The test is done on a 2-node cluster.

  1. Configure parallelism and two channels in RMAN, specifying a connect string for each channel, one per instance:
$ rman target /
RMAN> CONFIGURE DEVICE TYPE disk PARALLELISM 2;
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/Oracle123@ORCL1 as sysdba';
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/Oracle123@ORCL2 as sysdba';
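
This configuration is stored persistently in the target database control file, so you can review it at any time (a quick check, feeding the command to RMAN from the shell):

$ rman target / <<'EOF'
SHOW ALL;
EOF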

2. Define ORCL1 and ORCL2 aliases on each database node under $ORACLE_HOME/network/admin/tnsnames.ora:

ORCL1=
(DESCRIPTION=
           (ADDRESS= (PROTOCOL=tcp) (HOST=rac1.example.com) (PORT=1522))
           (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
           )
      )
 
ORCL2=
(DESCRIPTION=
           (ADDRESS= (PROTOCOL=tcp) (HOST=rac2.example.com) (PORT=1522))
           (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
           )
      )

Please note that in my case 1522 is the local listener port.
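
With the aliases in place, it is worth confirming that each one resolves and the local listeners answer before running the backup (tnsping checks name resolution and listener reachability only, not credentials):

$ tnsping ORCL1
$ tnsping ORCL2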

3. Run backup:

RMAN> backup database;
 
Starting backup at 22-MAR-22
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=507 instance=orcl1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=753 instance=orcl2 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/ORCL/DATAFILE/system.257.1098460673
input datafile file number=00004 name=+DATA/ORCL/DATAFILE/undotbs1.259.1098460743
channel ORA_DISK_1: starting piece 1 at 22-MAR-22
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=+DATA/ORCL/DATAFILE/sysaux.258.1098460717
input datafile file number=00005 name=+DATA/ORCL/DATAFILE/undotbs2.265.1098461311
input datafile file number=00007 name=+DATA/ORCL/DATAFILE/users.260.1098460743

As you see, two files were backed up by the 1st channel (1st instance) and the other three files by the 2nd channel (2nd instance).
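
If you want to double-check from the database side which instance each channel landed on, the RMAN sessions are visible in gv$session while the backup runs (a minimal sketch; the PROGRAM filter may need adjusting for your client):

$ sqlplus -s / as sysdba <<'EOF'
select inst_id, sid, program from gv$session where upper(program) like '%RMAN%';
EOF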

Now let's look at the other possible variant:

  1. Configure one TNS alias with the LOAD_BALANCE parameter:
ORCL_BALANCE=
     (DESCRIPTION=
           (TRANSPORT_CONNECT_TIMEOUT=3) (RETRY_COUNT=6)(LOAD_BALANCE=on)
           (ADDRESS= (PROTOCOL=tcp) (HOST=rac1.example.com) (PORT=1522))
           (ADDRESS= (PROTOCOL=tcp) (HOST=rac2.example.com) (PORT=1522))
           (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
           )
      )

Both node addresses are defined, and Oracle will pick one of them at random for each connection.
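
You can observe the randomness yourself by opening a few test connections through ORCL_BALANCE and checking where each one lands (a small sketch reusing the same sys credentials as in the channel configuration; run it several times):

$ sqlplus -s sys/Oracle123@ORCL_BALANCE as sysdba <<'EOF'
select instance_name from v$instance;
EOF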

2. Configure parallelism and one channel with ORCL_BALANCE string:

Please note that I ran this test case on the same server where CHANNEL 1 and CHANNEL 2 were already defined, so I had to clear them first:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK clear;
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK clear;
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK clear;

Define channel and parallelism:

RMAN> CONFIGURE DEVICE TYPE disk PARALLELISM 2;
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/Oracle123@ORCL_BALANCE as sysdba';

RMAN> backup database;
 
Starting backup at 22-MAR-22
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=752 instance=orcl1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=138 instance=orcl1 device type=DISK

Both channels were allocated on orcl1. Try one more time, or better, configure parallelism 3; the CHANNEL parameter is already defined and persists until changed:

RMAN>  CONFIGURE DEVICE TYPE disk PARALLELISM 3;
 
RMAN> backup database;
 
Starting backup at 22-MAR-22
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=752 instance=orcl1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=138 instance=orcl1 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=753 instance=orcl2 device type=DISK

As you can see, the choice really is random: two channels were allocated on the 1st node and only the last one on the 2nd node.

In my opinion the random algorithm is not the better option here, but you know best which variant is appropriate in your case.

OPATCHAUTO-72050: System instance creation failed, cluvfy fails Verifying ‘/tmp/’ …FAILED (PRVF-7546)

Problem:

While running cluvfy we get the following error:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
...
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
...
Verifying '/tmp/' ...FAILED
Cannot run program "/usr/bin/scp": error=13, Permission denied

You should not continue patching when runcluvfy fails; it is strongly recommended to resolve all failed checks first and only then run opatchauto. But if you insist on patching anyway, you will get the following:

[root@rac1 33509923]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/app/patchinstall/33509923 -oh /u01/app/19.3.0/grid
OPatchauto session is initiated at Sun Mar 20 06:27:17 2022

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2022-03-20_06-27-42AM.log.
...
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

Reason:

In our case /usr/bin/scp did not have the correct permissions: it had 600 while it should have had 755. Why this happened is not clear; it should not happen.
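
You can quickly check whether you are hitting the same issue by looking at the binary's current mode and ownership on each node (on most distributions /bin is a symlink to /usr/bin, so both paths point to the same file):

# ls -l /usr/bin/scp
# stat -c '%a %U:%G' /usr/bin/scp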

Solution:

Set the correct permissions on the scp binary on both nodes:

# chmod 755 /bin/scp

Retry patching; in our case it was then successful.

ora.storage fails, Error 4 querying length of attr ASM_DISCOVERY_ADDRESS, ORA-01017

Problem:

CRS on the 1st node is able to start, but not on the 2nd node.

CRS on the 2nd node hangs and later fails:

CRS-2672: Attempting to start 'ora.storage' on 'rac2'
ORA-01017: invalid username/password; logon denied
CRS-5055: unable to connect to an ASM instance because no ASM instance is running in the cluster

During that time the CRS alert.log shows:

2022-03-15 20:15:23.722 [ORAROOTAGENT(63477)]CRS-5019: All OCR locations are on ASM disk groups [GRID], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in "/u01/app/grid/diag/crs/rac2/crs/trace/ohasd_orarootagent_root.trc".

ohasd_orarootagent_root.trc shows:

2022-03-15 20:23:35.108 : USRTHRD:1769867008: [     INFO] {0:5:3} [ora.storage] 9788 Error 4 querying length of attr ASM_DISCOVERY_ADDRESS

2022-03-15 20:23:35.110 : USRTHRD:1769867008: [     INFO] {0:5:3} [ora.storage] 9788 Error 4 querying length of attr ASM_STATIC_DISCOVERY_ADDRESS

2022-03-15 20:23:35.136 : USRTHRD:1769867008: [     INFO] {0:5:3} [ora.storage] 9506 Error 4 opening dom root in 0x7fa3100013a0

Reason:

Either the ASM password file is corrupted or it does not exist. In our case, the GRID diskgroup had been recreated after clearing the disk headers, and we forgot to copy the ASM password file back.
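
A quick way to check whether ASM has a password file configured at all is asmcmd's pwget command, run as the grid owner on the surviving node:

$ asmcmd pwget --asm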

Solution:

  1. If you have a backup of the ASM password file, you can simply copy it back to the ASM diskgroup:
$ asmcmd pwcopy --asm /tmp/asm_passwordfile +GRID/orapwASM -f

and stop/start CRS.
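
To avoid ending up here again, it is worth keeping a copy of the ASM password file outside the diskgroup; pwcopy works in both directions (the destination path below is just an example):

$ asmcmd pwcopy +GRID/orapwASM /backup/asm_passwordfile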

2. If you don't have a password file backup, you need to create a new one and add the necessary users to it:

[grid@rac1 ~]$ asmcmd pwcreate --asm +GRID/orapwasm -f
Enter password: **********

Check existing users:

[grid@rac1 ~]$ asmcmd lspwusr
Username sysdba sysoper sysasm
SYS TRUE TRUE FALSE

Add necessary users and grant permissions:

$ asmcmd orapwusr --grant sysasm SYS
$ asmcmd orapwusr --add ASMSNMP
Enter password: *********
$ asmcmd orapwusr --grant sysdba ASMSNMP

Check permissions again:

$ asmcmd lspwusr
Username sysdba sysoper sysasm
     SYS   TRUE    TRUE   TRUE
 ASMSNMP   TRUE   FALSE  FALSE

Find out the user name and password that CRSD uses to connect: GI uses the internal user CRSUSER__ASM_001 with an internally generated password to access ASM during startup.

Find the string SYSTEM.ASM.CREDENTIALS.USERS.CRSUSER__ASM_001 in the following output and save the ORATEXT value:

# ocrdump -stdout | less
...
[SYSTEM.ASM.CREDENTIALS.USERS.CRSUSER__ASM_001]
ORATEXT : d68aec9585136fa8ff8f79f483e4ae64:grid
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_NONE, USER_NAME : grid, GROUP_NAME : oinstall}

Query the password for the GUID user (the GUID will be different in your case; take the value from your own output):

# crsctl get credmaint -path /ASM/Self/d68aec9585136fa8ff8f79f483e4ae64 -credtype userpass -id 0 -attr passwd -local
mB28wSM4AVFAVEYamUIvrMjEo2Nfa

Add this user to the ASM password file:

$ asmcmd orapwusr --add CRSUSER__ASM_001
>>>>> provide <password> you retrieved earlier

Grant the necessary privileges to this user:

$ asmcmd orapwusr --grant sysdba CRSUSER__ASM_001
$ asmcmd orapwusr --grant sysasm CRSUSER__ASM_001

Check the list again:

$ asmcmd lspwusr
        Username sysdba sysoper sysasm
             SYS   TRUE    TRUE   TRUE
         ASMSNMP   TRUE   FALSE  FALSE
CRSUSER__ASM_001   TRUE   FALSE   TRUE

Stop and start CRS on the affected node.
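
For reference, restarting the stack on that node looks like this (run as root; -f forces the stop even if some resources are still up):

# crsctl stop crs -f
# crsctl start crs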

Oracle 12c relink resets oracle group to oinstall, while 19c sets asmadmin

Problem:

In a mixed environment, where you have 19c GI but several versions of Oracle RDBMS homes (19c, 12c), you may encounter the following error during a 12c database restore:

ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-15040: diskgroup is incomplete

There are several reasons for the above error, and one of them is that the $ORACLE_HOME/bin/oracle binary under the RDBMS home does not have the group asmadmin. After identifying this, you change the ownership to oracle:asmadmin, but after a while you may notice the group has been reset to oinstall. This can happen after patching or after running relink manually. Let's see a test case:

[oracle@rac1 lib]$ ll /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 323613256 Feb 19 17:00 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

Run relink:

[oracle@rac1 lib]$ make -f ins_rdbms.mk ioracle

Check permissions again:

[oracle@rac1 lib]$ ll /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle oinstall 323613256 Feb 19 17:03 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

If you do the same for the 19c home, the group remains asmadmin.

Reason:

The ins_rdbms.mk script differs between 19c and 12c; compare the ioracle sections:

12c:

ioracle: preinstall $(ORACLE)
        -$(NOT_EXIST) $(ORACLE_HOME)/bin/oracle ||\
           mv -f $(ORACLE_HOME)/bin/oracle $(ORACLE_HOME)/bin/oracleO
        -mv $(ORACLE_HOME)/rdbms/lib/oracle $(ORACLE_HOME)/bin/oracle
        -chmod 6751 $(ORACLE_HOME)/bin/oracle

19c:

ioracle: preinstall $(ORACLE)
        -$(RMF) $(ORACLE_HOME)/bin/oracle
        -mv $(ORACLE_HOME)/rdbms/lib/oracle $(ORACLE_HOME)/bin/oracle
        -chmod 6751 $(ORACLE_HOME)/bin/oracle

        -(if [ ! -f $(ORACLE_HOME)/bin/crsd.bin ]; then \
            getcrshome="$(ORACLE_HOME)/srvm/admin/getcrshome" ; \
            if [ -f "$$getcrshome" ]; then \
                crshome="`$$getcrshome`"; \
                if [ -n "$$crshome" ]; then \
                    if [ $$crshome != $(ORACLE_HOME) ]; then \
                        oracle="$(ORACLE_HOME)/bin/oracle"; \
                        $$crshome/bin/setasmgidwrap oracle_binary_path=$$oracle; \
                    fi \
                fi \
            fi \
        fi\
        );

As you can see, the ioracle section in 12c does not contain the final if statement, which sets the oracle binary's group to asmadmin (using setasmgidwrap).
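
For illustration only, the step that 19c adds can also be run by hand: setasmgidwrap lives in the GI home and takes the oracle binary path as its argument, exactly as in the makefile excerpt above (the GI home path below is an assumption; use your own):

$ /u01/app/19.3.0/grid/bin/setasmgidwrap oracle_binary_path=/u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle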

Solution:

Unfortunately, I cannot recommend adding that if statement to the 12c ins_rdbms.mk, although I did it in my test environment and it works as expected: after running make -f ins_rdbms.mk ioracle for 12c, the group is set to asmadmin.

It is better to ask Oracle Support about this change and apply it only after getting their approval.

The only harmless recommendation from my side is to keep a permission-change script and run it after every patch or relink:

# chown oracle:asmadmin /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
# chmod 6751 /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle
# ll /u01/app/oracle/product/12.1.0/dbhome_1/bin/oracle

If you still decide to add the if statement as I did, make sure there is a TAB rather than spaces before the if clause. Otherwise, you will get the following error, which makes it easy to identify the line that needs correcting:

[oracle@rac1 lib]$ make -f ins_rdbms.mk ioracle
ins_rdbms.mk:120: *** missing separator (did you mean TAB instead of 8 spaces?).  Stop.


Apply GI patch 19.14 (33509923) on GI 19.3

Current environment:

$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

1. You must use OPatch utility version 12.2.0.1.28 or later to apply this patch.

Download p6880880_190000_Linux-x86-64.zip – OPatch 12.2.0.1.28 for DB 19.0.0.0.0 (Nov 2021), or later.

Replace the existing OPatch with the new one:

# export ORACLE_HOME=/u01/app/19c/grid
# rm -rf $ORACLE_HOME/OPatch
# su - grid 
$ export ORACLE_HOME=/u01/app/19c/grid
$ unzip -o /u01/swtmp/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME

$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.28

2. Unzip the patch 33509923 as the Grid home owner:

# su - grid
$ cd /u01/swtmp
$ unzip p33509923_190000_Linux-x86-64.zip

3. Determine whether any currently installed one-off patches conflict with this patch 33509923 as follows:

# su - grid

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33515361

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.28
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2022-01-31_11-58-55AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33529556

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33534448

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33239955

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33575402

If a conflict is detected, stop the patch installation and contact Oracle Support Services; otherwise, continue.
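
Since the bundle contains several sub-patches, the same check can be scripted instead of typing it once per directory (a small sketch; the directory names are the ones listed above):

$ for p in 33515361 33529556 33534448 33239955 33575402; do
    $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
      -phBaseDir /u01/swtmp/33509923/$p
  done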

4. Stop the database instances running on that server, for example with srvctl:
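
(The database and instance names below are placeholders; run this as the database home owner on the node being patched.)

$ srvctl stop instance -db orcl -instance orcl1
$ srvctl status database -db orcl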

5. Apply patch using opatchauto:

# cd /u01/swtmp/33509923/
# export ORACLE_HOME=/u01/app/19c/grid/
# export PATH=$ORACLE_HOME/bin:$PATH
# $ORACLE_HOME/OPatch/opatchauto apply
...
==Following patches were SUCCESSFULLY applied:

Patch: /u01/swtmp/33509923/33239955
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33515361
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33529556
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33534448
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33575402
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Check current version:

$ $ORACLE_HOME/OPatch/opatch lspatches
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)
33534448;ACFS RELEASE UPDATE 19.14.0.0.0 (33534448)
33529556;OCW RELEASE UPDATE 19.14.0.0.0 (33529556)
33515361;Database Release Update : 19.14.0.0.220118 (33515361)
33239955;TOMCAT RELEASE UPDATE 19.0.0.0.0 (33239955)

6. After the patching is finished, it is recommended to back up the clusterware components, for example:
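
One common approach is a manual OCR backup taken as root (adjust to your own backup standards):

# ocrconfig -manualbackup
# ocrconfig -showbackup manual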

The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed

Problem:

While installing Oracle software using applyRU:

$ /u01/app/oracle/product/19.3.0/dbhome_1/runInstaller -silent -applyRU /u01/swtmp/32895426/32904851/ -responseFile /opt/rsp/db_19.3_EE_swonlyinstall.rsp -ignorePrereqFailure -waitforcompletion

Got an error:

ERROR: The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed.

Solution:

If a previous attempt to install the Oracle software using the applyRU or applyOneOffs option failed, the home becomes unusable. Before retrying, you need to clean it up and extract the base software again:

$ rm -rf /u01/app/oracle/product/19.3.0/dbhome_1/*

$ unzip -o /u01/swtmp/LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.3.0/dbhome_1

As a best practice I always use the latest OPatch, so it needs to be unzipped again as well:

$ unzip -o /u01/swtmp/p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.3.0/dbhome_1

Rerun the command:

$ /u01/app/oracle/product/19.3.0/dbhome_1/runInstaller -silent -applyRU /u01/swtmp/32895426/32904851/ -responseFile /opt/rsp/db_19.3_EE_swonlyinstall.rsp -ignorePrereqFailure -waitforcompletion
Preparing the home to patch...
Applying the patch /u01/swtmp/32895426/32904851/...

After GI upgrade flashgrid-cluster shows wrong diskgroup status

Problem:

I have upgraded Oracle GI from 12c to 19c. ASM is up and diskgroups are mounted, but flashgrid-cluster still shows that diskgroups are not mounted on any node:

[root@rac1 ~]# flashgrid-cluster
FlashGrid 21.8.67.66809 #707efdc10a212421dbd3737ef8e693ae14c15964
License: Active, Marketplace
Licensee: Flashgrid Inc.
Support plan: 24x7
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FlashGrid running: OK
Clocks check: OK
Configuration check: OK
Network check: OK

Querying nodes: rac1, rac2, racq ...

Cluster Name: mariGIMR
Cluster status: Warning
-------------------------------------------------------------
Node  Status   ASM_Node  Storage_Node  Quorum_Node  Failgroup
-------------------------------------------------------------
rac1  Warning  Yes       Yes           No           RAC1
rac2  Warning  Yes       Yes           No           RAC2
racq  Warning  No        No            Yes          RACQ
-------------------------------------------------------------
---------------------------------------------------------------------------------------------------------
GroupName  Status   Mounted   Type    TotalMiB  FreeMiB  OfflineDisks  LostDisks  Resync  ReadLocal  Vote
---------------------------------------------------------------------------------------------------------
GRID       Warning  No        N/A     N/A       N/A      N/A           N/A        N/A     N/A        N/A
---------------------------------------------------------------------------------------------------------

Solution:

After the upgrade, you should restart the flashgrid_asm service on all nodes; it does not require downtime:

[root@rac1 ~]# systemctl restart flashgrid_asm
[root@rac2 ~]# systemctl restart flashgrid_asm

Check the status again:

...
---------------------------------------------------------------------------------------------------------
GroupName  Status   Mounted   Type    TotalMiB  FreeMiB  OfflineDisks  LostDisks  Resync  ReadLocal  Vote
---------------------------------------------------------------------------------------------------------
GRID       Good     AllNodes  NORMAL  204800    154336   0             0          No      Enabled    3/3
---------------------------------------------------------------------------------------------------------

THE SPECIFIED LOGFILE BLOCKSIZE (512) IS LESS THAN THE MEDIA SECTOR SIZE ON DISK (4096)

Problem:

Alert log contains:

WARNING! THE SPECIFIED LOGFILE BLOCKSIZE (512) IS LESS THAN THE MEDIA SECTOR SIZE ON DISK (4096). LOGFILE WRITES ARE NOT SECTOR ALIGNED. THIS MAY LEAD TO DEGRADED PERFORMANCE DUE TO READ-MODIFY-WRITE OPERATION

Reason/Reproduce:

The redo log groups have the default block size of 512 bytes, so, as the warning says, the redo log block size is smaller than the sector size attribute of the diskgroup (4096).

This is reproducible when the disk physical block size is 4096. In AWS, for example, r5b instances have this size for their attached disks (this may change in the future, so verify it when reading this post).

1. Make sure the attached disks have a physical block size of 4096:

[root@rac1 ~]# find /sys/block/*/queue -name physical_block_size | while read f ; do echo "$f $(cat $f)" ; done

/sys/block/nvme0n1/queue/physical_block_size 4096
/sys/block/nvme1n1/queue/physical_block_size 4096
/sys/block/nvme2n1/queue/physical_block_size 4096
/sys/block/nvme3n1/queue/physical_block_size 4096
/sys/block/nvme4n1/queue/physical_block_size 4096
/sys/block/nvme5n1/queue/physical_block_size 4096

2. Create a diskgroup with sector size 4096:

CREATE DISKGROUP "DG4K" NORMAL REDUNDANCY                                                
 FAILGROUP "RAC1" DISK '/dev/flashgrid/rac1.xvdbd' NAME "RAC1$XVDBD" SIZE 10240M 
 FAILGROUP "RAC2" DISK '/dev/flashgrid/rac2.xvdbd' NAME "RAC2$XVDBD" SIZE 10240M 
 QUORUM FAILGROUP "RACQ" DISK '/dev/flashgrid/racq.xvdbz' NAME "RACQ$XVDBZ"      
 ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '19.0.0.0', 'compatible.rdbms' = '19.0.0.0', 'sector_size'='4096';

Make sure the sector size is 4096 for this diskgroup:

SQL> select name,sector_size from v$asm_diskgroup;

NAME			       SECTOR_SIZE
------------------------------ -----------
DATA				       512
GRID				       512
DG4K				      4096
FRA				       512

Check log group block size:

SQL> select group#, blocksize from v$log;

    GROUP#  BLOCKSIZE
---------- ----------
	 1	  512
	 2	  512
	 3	  512
	 4	  512

Add a new logfile member on DG4K:

SQL> alter database add logfile member '+DG4K' to group 4;

Database altered.

Check alert log:

2021-12-09T15:32:04.533204+00:00
WARNING! THE SPECIFIED LOGFILE BLOCKSIZE (512) IS LESS THAN THE MEDIA SECTOR SIZE ON DISK (4096). LOGFILE WRITES ARE NOT SECTOR ALIGNED. THIS MAY LEAD TO DEGRADED PERFORMANCE DUE TO READ-MODIFY-WRITE OPERATIONS.
2021-12-09T15:32:21.080534+00:00
Completed: alter database add logfile member '+DG4K' to group 4

Solution:

You should create a new redo log group (not just a member, but a whole group) with a 4096-byte block size:

SQL> alter database add logfile group 5 ('+DG4K') size 512M blocksize 4096;
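
Once enough 4K groups exist, the old 512-byte groups can be rotated out; a group can only be dropped while it is INACTIVE (a minimal sketch, group numbers from this example):

$ sqlplus -s / as sysdba <<'EOF'
alter system switch logfile;
alter system checkpoint;
select group#, status, blocksize from v$log;
EOF

Any group that shows INACTIVE with BLOCKSIZE 512 can then be removed with ALTER DATABASE DROP LOGFILE GROUP n and, if needed, recreated on DG4K with BLOCKSIZE 4096.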