Upgrade 12.1.0.2 to 12.2.0.1 fails at phase 109 with “Unexpected error encountered”

We tried to upgrade our database a week ago and it failed with the following error:

Serial Phase #:108 [ORCL] Files:1 Time: 1s 
******************* Migration ****************** 
Serial Phase #:109 [ORCL] Files:1 wait_for_completion: unexpected error in next_proc() 
catconExec: unexpected error in wait_for_completions 

Unexpected error encountered in catconExec; exiting 

Unexpected error encountered in catctlMain; Error Stack Below; exiting 
Died at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 7822. 
at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 7822. 
main::catctlDie("\x{a}Unexpected error encountered in catconExec; exiting\x{a} 2") called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 4556 
main::catctlExecutePhaseFiles(109, 1, undef, undef, undef) called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 1862 
main::catctlRunPhase(109, 1, undef, undef, undef) called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 2006 
main::catctlRunPhases(0, 116, 116, undef, undef, undef) called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 2171 
main::catctlRunMainPhases() called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 1341 
main::catctlMain() called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 1256 
eval {...} called at /u01/app/oracle/product/12.2.0/db/rdbms/admin/catctl.pl line 1254

We asked Oracle Support; it seemed they had not come across this situation before.
12.2 was the newest release at the time, so its bugs were not yet widely known.

After the failure we postponed the upgrade to the following week, and I started reading all the documentation steps carefully. I almost learned them by heart 😀

The support note where the upgrade steps are documented is Doc ID 2173141.1.

Below I will write down the upgrade steps and the error that happened at phase 109,
then correct the error and finish the upgrade successfully.

================ Preupgrade steps

0. Copy the orapwORCL, sqlnet.ora, tnsnames.ora and listener.ora files from the old home to the new home.
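    A minimal sketch of this step, assuming the default locations under each home (orapwORCL in dbs, the network files in network/admin); adjust the paths to your environment:

    cp /u01/app/oracle/product/12.1.0/db/dbs/orapwORCL              /u01/app/oracle/product/12.2.0/db/dbs/
    cp /u01/app/oracle/product/12.1.0/db/network/admin/sqlnet.ora   /u01/app/oracle/product/12.2.0/db/network/admin/
    cp /u01/app/oracle/product/12.1.0/db/network/admin/tnsnames.ora /u01/app/oracle/product/12.2.0/db/network/admin/
    cp /u01/app/oracle/product/12.1.0/db/network/admin/listener.ora /u01/app/oracle/product/12.2.0/db/network/admin/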

  1. Execute Preupgrade script from source home
    export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db
    export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/u01/app/oracle/product/12.1.0/db/bin
    $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/12.2.0/db/rdbms/admin/preupgrade.jar TERMINAL TEXT

    It will generate two fixup scripts: preupgrade_fixups.sql and postupgrade_fixups.sql.
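    When it finishes it prints where the output was written; with ORACLE_BASE set this is typically a directory like the one below (the path is my assumption about the tool's defaults, so trust the path printed by your own run):

    ls $ORACLE_BASE/cfgtoollogs/ORCL/preupgrade/
    # expect preupgrade.log, preupgrade_fixups.sql and postupgrade_fixups.sql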

  2. Run the preupgrade fixup script. It will not correct all problems; some steps must be done manually.
    sqlplus /  as sysdba
    @?/rdbms/admin/preupgrade_fixups.sql
  3. The documentation says to back up the whole database, but our database is 11TB in size, so a full backup before the upgrade is not an option.
    If you read the doc carefully, it says you may take the schema (user) tablespaces offline (read only), which means it is not necessary to back them up, because they are not changed during the upgrade.
    So we can simply shut the database down cleanly (normal, immediate or transactional) and cold copy the SYSTEM and SYSAUX datafiles to another safe location. You must also copy one of the controlfiles.
    Copying the undo tablespace datafiles is not strictly necessary; a cleanly shut down database can still be recovered without them, but if you do back up undo you save yourself the extra steps needed to recover the database without it. Then take the user tablespaces read only by running the output of this select:

    SELECT distinct 'alter tablespace '|| tablespace_name||' read only;' 
    from dba_data_files
    where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS01','UNDOTBS02');

Please put your own UNDO tablespace names instead of mine.
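As an alternative to spooling and running the SELECT output by hand, here is a minimal PL/SQL sketch (assuming the same UNDO tablespace names as above) that applies the statements directly; the same loop with 'read write' undoes it in the post-upgrade steps:

           begin
             -- switch every non-system, non-undo tablespace to read only
             for t in (select distinct tablespace_name
                         from dba_data_files
                        where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS01','UNDOTBS02'))
             loop
               execute immediate 'alter tablespace "' || t.tablespace_name || '" read only';
             end loop;
           end;
           /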

4. Disable any custom DDL triggers, if they exist. I had created a DDL trigger, so I am disabling it.

           alter trigger sys.audit_ddl_trg disable;

5. Purge recyclebin and gather dictionary statistics

           purge dba_recyclebin;
           exec dbms_stats.gather_dictionary_stats;

6. Shut the database down cleanly, or you will not be able to restore it if the upgrade fails.

           sqlplus / as sysdba
           alter system checkpoint;
           shutdown immediate;
           lsnrctl stop

7. Cold copy the SYSTEM and SYSAUX datafiles and the controlfiles

            cp /ud01/oradata/ORCL/sysaux01.dbf /ud02/backup_mk/
            cp /ud01/oradata/ORCL/system01.dbf /ud02/backup_mk/
            cp /ud01/oradata/ORCL/control01.ctl /ud02/backup_mk/
            cp /ud01/oradata/ORCL/control02.ctl /ud02/backup_mk/
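
A quick sanity check I add here (my own habit, not part of the documented steps): compare checksums of the source files and the copies before moving on.

            md5sum /ud01/oradata/ORCL/system01.dbf /ud02/backup_mk/system01.dbf
            md5sum /ud01/oradata/ORCL/sysaux01.dbf /ud02/backup_mk/sysaux01.dbf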

====================== Upgrade

8.  Start DB in upgrade mode from target ORACLE_HOME

           export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db
           export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/u01/app/oracle/product/12.2.0/db/bin:/u01/app/oracle/product/12.2.0/db

           sqlplus / as sysdba
           startup upgrade;
           exit

           cd /u01/app/oracle/product/12.2.0/db/bin
           ./dbupgrade -n 33 -T
           

The -T option tells the upgrade that the user (schema) tablespaces are taken offline/read only; we have already done that ourselves, so we just pass -T.

So here, at phase 109, it fails with the unexpected error mentioned previously.

You should check all the logs that the upgrade creates; the main error was written in catupgrd0.log.
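If you are not sure which log to open, a quick way to narrow it down (run it in the log directory printed at the start of the dbupgrade run):

              grep -n "ORA-\|^ERROR at line\|Died" catupgrd*.log

The relevant part of catupgrd0.log was: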

              catrequtlmg: b_StatEvt = TRUE 
              catrequtlmg: b_SelProps = FALSE 
              catrequtlmg: b_UpgradeMode = TRUE 
              catrequtlmg: b_InUtlMig = TRUE 
              catrequtlmg: Deleting table stats 
              catrequtlmg: Gathering Table Stats OBJ$MIG 
              catrequtlmg: Gathering Table Stats USER$MIG 
              declare 
              * 
              ERROR at line 1: 
              ORA-01422: exact fetch returns more than requested number of rows 
              ORA-06512: at "SYS.DBMS_STATS", line 36873 
              ORA-06512: at "SYS.DBMS_STATS", line 36507 
              ORA-06512: at "SYS.DBMS_STATS", line 35428 
              ORA-06512: at "SYS.DBMS_STATS", line 34760 
              ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22496 
              ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22483 
              ORA-06512: at "SYS.DBMS_STATS", line 34416 
              ORA-06512: at "SYS.DBMS_STATS", line 35168 
              ORA-06512: at "SYS.DBMS_STATS", line 36230 
              ORA-06512: at "SYS.DBMS_STATS", line 36716 
              ORA-06512: at line 149

As you can see, it was gathering statistics on the USER$MIG table and got ORA-01422 🙂
If you try to gather the statistics manually, you get the same error:

         EXEC sys.dbms_stats.gather_table_stats('SYS', 'USER$MIG',method_opt=>'FOR ALL COLUMNS SIZE SKEWONLY');

If you check the entries in the obj$ table, you will see that there are duplicates:

          SQL> select to_char(OBJ#), name,  ctime, STATUS 
               from obj$ 
               where name='USER$MIG'; 

          TO_CHAR(OBJ#) NAME CTIME 
          ----------------- ------------ -------------------- 
          956329 USER$MIG 08-10-2015 12:59:55 
          956328 USER$MIG 08-10-2015 12:59:55 
          956325 USER$MIG 08-10-2015 12:59:53 
          956327 USER$MIG 08-10-2015 12:59:55 
          956326 USER$MIG 08-10-2015 12:59:55 
          1621545 USER$MIG 19-05-2017 22:55:26

There are six entries. Five of them were created in 2015 (probably left over from a previous upgrade in 2015 that also failed on its first attempt).

Oracle Support did not advise deleting the old entries from this table, BUT support is sometimes wrong. We had no time left: we had already been upgrading the database for 4 hours, the first attempt had failed, and we had a backup, so we took the risk and deleted the 2015 entries from the table.
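
Before running the delete below, a sanity check I would add (my own addition, not something Oracle Support suggested): confirm that the stale OBJ# values are not referenced as real tables or columns in the dictionary.

          -- both counts should be 0 if these are just orphaned obj$ rows
          select 'tab$' src, count(*) cnt from sys.tab$
          where obj# in (956325,956326,956327,956328,956329)
          union all
          select 'col$', count(*) from sys.col$
          where obj# in (956325,956326,956327,956328,956329);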

          delete from obj$ 
          where OBJ# in (956329, 
          956328, 
          956325, 
          956327, 
          956326); 
          commit;

Rerun the upgrade from phase 109:

          ./dbupgrade -p 109 -T -n 40

And it completed successfully (happy), so the risk paid off!

################ Post Upgrade

           SQL> @?/rdbms/admin/postupgrade_fixups.sql

It fixes the steps that can be done automatically, but it may also list steps that must be done manually. One of them is upgrading the time zone file, which I have described in my previous post, “Upgrading timezone manually in 12c”.
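
A quick way to see whether the time zone step applies to you: check which DST file the database is currently using and compare it with the latest version shipped with 12.2.

           -- shows the time zone (DST) file version currently in use
           SELECT * FROM v$timezone_file;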

Change the COMPATIBLE parameter in your spfile

           From 
           *.compatible='12.1.0.2.0'
           To
           *.compatible='12.2.0.1.0'
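
A minimal way to do that with the instance still up (raising COMPATIBLE is irreversible and only takes effect after a restart):

           ALTER SYSTEM SET compatible='12.2.0.1.0' SCOPE=SPFILE;
           SHUTDOWN IMMEDIATE
           STARTUP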

Change the home to the new Oracle home in /etc/oratab and in listener.ora.
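
For example, the /etc/oratab entry changes roughly like this (SID and paths are the ones used in this post; keep whatever autostart flag you already had):

           # before
           ORCL:/u01/app/oracle/product/12.1.0/db:N
           # after
           ORCL:/u01/app/oracle/product/12.2.0/db:N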

Recompile invalid objects

           SQL> @?/rdbms/admin/utlrp.sql
           select count(*) from dba_objects where status!='VALID';

Start listener from new home:

           export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db
           export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/u01/app/oracle/product/12.2.0/db/bin
           lsnrctl start

Enable the trigger if you disabled it before:

           alter trigger sys.audit_ddl_trg enable;

Make schema tablespaces read write:

           SELECT distinct 'alter tablespace '|| tablespace_name||' read write;' 
           from dba_data_files
           where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS01','UNDOTBS02');

About Mariami Kupatadze
Oracle Certified Master. LinkedIn: https://www.linkedin.com/in/mariami-kupatadze-01074722/

One Response to Upgrade 12.1.0.2 to 12.2.0.1 fails at 109 phase with “Unexpected error encountered”

  1. ADEEP MOHAIDEEN RAHUMATHULLAH SHAHIB says:

    We faced the same issue while upgrading from 12.1 to 12.2 and ended up rolling back to 12.1. Please let us know the root cause.
