Apache Derby: “ERROR 25502: An SQL data change is not permitted for a read-only connection, user or database.”

Problem:

While testing Oracle ZDM (Zero Downtime Migration), I stopped the ZDM host ungracefully. After that, zdmservice was not able to start:

[zdmuser@primracq ~]$ /u01/app/zdmhome/bin/zdmservice start
...
CRS_ERROR:TCC-0004: The container was not able to start.
CRS_ERROR:One or more listeners failed to start. Full details will be found in the appropriate container log fileContext [/rhp] startup failed due to previous errors
...
Start operation could not start zdmservice.
zdmservice start failed…

Troubleshooting:

Check the logs under [/rhp]; in my case, /u01/app/zdmbase/crsdata/primracq/rhp/logs/:

[zdmuser@primracq ~]$ ll /u01/app/zdmbase/crsdata/primracq/rhp/logs/
total 3748
-rw-r--r-- 1 zdmuser zdm   277774 Dec 17 19:19 catalina.2019-12-17.log
-rw-r--r-- 1 zdmuser zdm   186117 Dec 18 18:12 catalina.2019-12-18.log
-rw-r--r-- 1 root    root   24133 Dec 19 19:12 catalina.2019-12-19.log
-rw-r--r-- 1 root    root  541694 Dec 22 20:31 catalina.2019-12-22.log
-rw-r--r-- 1 zdmuser zdm    35666 Dec 23 12:40 catalina.2019-12-23.log
-rw-r--r-- 1 zdmuser zdm  2407571 Dec 23 12:40 catalina.out
-rw-r--r-- 1 zdmuser zdm    12541 Dec 23 12:40 derby.log
-rw-r--r-- 1 zdmuser zdm   273095 Dec 23 12:40 jwc_checker_stdout_err_0.log
-rw-r--r-- 1 zdmuser zdm      155 Dec 23 12:40 JWCStartEvent.log
-rw-r--r-- 1 zdmuser zdm     5083 Dec 17 19:19 localhost.2019-12-17.log
-rw-r--r-- 1 zdmuser zdm     5083 Dec 18 18:12 localhost.2019-12-18.log
-rw-r--r-- 1 root    root    5083 Dec 19 19:12 localhost.2019-12-19.log
-rw-r--r-- 1 root    root    5083 Dec 22 20:31 localhost.2019-12-22.log
-rw-r--r-- 1 zdmuser zdm    18708 Dec 23 12:40 localhost.2019-12-23.log

catalina.out showed the following error:

Internal Exception: java.sql.SQLException: An SQL data change is not permitted for a read-only connection, user or database.
...
Caused by: ERROR 25502: An SQL data change is not permitted for a read-only connection, user or database.

Solution:

Remove the db.lck file under /u01/app/zdmbase/derbyRepo:

# rm -rf /u01/app/zdmbase/derbyRepo/db.lck
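The stale db.lck is left behind because Derby was not shut down cleanly. If you want to be sure the lock really is stale and no zdmservice Java process is still holding it, a quick check (the status subcommand and the grep pattern are assumptions, adjust to your environment):

[zdmuser@primracq ~]$ /u01/app/zdmhome/bin/zdmservice status
[zdmuser@primracq ~]$ ps -ef | grep -i -E 'derby|rhp' | grep -v grep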

Then start the service again:
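[zdmuser@primracq ~]$ /u01/app/zdmhome/bin/zdmservice start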

Change Oracle DB name using NID

Details:

Environment: RAC
Source name: orclA
Target name: orcl

1. Make sure that you have a recoverable whole database backup.
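If you need one, here is a minimal RMAN sketch for a whole database backup (assuming the database is in archivelog mode and the default device type and backup destination are already configured; adjust channels, format and retention for your environment):

[oracle@primrac1 ~]$ rman target /
RMAN> backup database plus archivelog;
RMAN> backup current controlfile;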

2. In a Real Application Clusters environment, we need to set the cluster_database parameter to false and mount an instance on only one node:

[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;

[oracle@primrac1 ~]$ srvctl stop database -db orclA
[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> startup mount;
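Optionally, confirm that exactly one instance is running and that it is mounted (a quick check, not part of the original procedure):

SQL> select instance_name, status from v$instance;
SQL> select inst_id, instance_name from gv$instance;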

3. From the first node, run the nid utility. Specify a username with SYSDBA privilege, the new database name (DBNAME), and set SETNAME to YES.

[oracle@primrac1 ~]$ . oraenv
ORACLE_SID = [oracle] ? orclA1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@primrac1 ~]$ nid TARGET=SYS DBNAME=orcl SETNAME=YES
DBNEWID: Release 19.0.0.0.0 - Production on Mon Dec 16 16:54:36 2019
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to database ORCLA (DBID=3133348785)
Connected to server version 19.4.0
Control Files in database:
    +DATA/ORCLA/CONTROLFILE/current.290.1020357617
Change database name of database ORCLA to ORCL? (Y/[N]) => Y
Proceeding with operation
Changing database name from ORCLA to ORCL
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - modified
     Datafile +DATA/ORCLA/DATAFILE/system.283.102035744 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/sysaux.286.102035749 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs1.287.102035753 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs2.300.102035839 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/users.288.102035754 - wrote new name
     Datafile +DATA/ORCLA/TEMPFILE/temp.296.102035765 - wrote new name
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - wrote new name
     Instance shut down
 Database name changed to ORCL.
 Modify parameter file and generate a new password file before restarting.
 Succesfully changed database name.
 DBNEWID - Completed succesfully.

4. Change the db_name parameter in the server parameter file and set cluster_database back to true:

[oracle@primrac1 dbs]$ . oraenv
ORACLE_SID = [orcl1] ? orclA1

[oracle@primrac1 dbs]$ sqlplus / as sysdba
SQL> startup nomount;
SQL> alter system set db_name=orcl scope=spfile;
SQL> alter system set cluster_database=true scope=spfile;
SQL> shut immediate;

5. Modify the database name in the Clusterware configuration using srvctl:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -dbname orcl
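You can verify the change afterwards; the resource keeps its unique name orclA, while the database name is now orcl:

[oracle@primrac1 dbs]$ srvctl config database -db orclA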

6. Remove the existing password file entry and create a new one:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -pwfile

[oracle@primrac1 dbs]$ orapwd dbuniquename=orclA file='+DATA/ORCLA/PASSWORD/pwdorcl.ora'
Enter password for SYS:
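To double-check where the new password file is registered, one option is ASMCMD's pwget, run as the Grid Infrastructure owner (a sketch, assuming 12c or later ASMCMD):

ASMCMD> pwget --dbuniquename orclA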

7. Start the database using srvctl:

[oracle@primrac1 dbs]$ srvctl start database -db orclA
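To confirm that both instances came up and that the database now reports the new name, for example:

[oracle@primrac1 dbs]$ srvctl status database -db orclA

SQL> select name, open_mode from v$database;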

Depending on your needs, you may also want to rename the instances. If so, make sure you don't have instance-specific parameters in the spfile (e.g. orclA1.instance_number); otherwise, it's better to recreate the parameter file with the correct instance names.
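A quick way to spot instance-specific entries before renaming instances is to query v$spparameter, which lists parameters set for a particular SID rather than for all instances (a sketch, not part of the original procedure):

SQL> select sid, name, value from v$spparameter where sid <> '*' and isspecified = 'TRUE';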

I've shown just a simple example of a database name change. In your specific case, there may be other things that depend on the database name.

Common usage examples of cluvfy utility

1. Discover all available network interfaces and verify the connectivity between the cluster nodes through those interfaces:

$ cluvfy comp nodecon -n all -verbose

2. Check the reachability of specified nodes from a source node:

$ cluvfy comp nodereach -n all -verbose

3. Check the integrity of Oracle Cluster Registry (OCR) on all the specified nodes:

$ cluvfy comp ocr -n all -verbose

4. Verify the integrity of the Oracle High Availability Services daemon (OHASD) on all nodes in the cluster:

$ cluvfy comp ohasd -n all -verbose

5. Compare nodes for compatibility:

$ cluvfy comp peer -n all -verbose

6. Verify the software configuration on all nodes in the cluster for the Oracle Clusterware home directory:

$ cluvfy comp software -n all -verbose

7. Check the integrity of the cluster on all the nodes in the node list:

$ cluvfy comp clu -n all -verbose
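If you are not sure which components your release supports, cluvfy can list them (assuming the standard command-line options of recent releases):

$ cluvfy comp -list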