
Monday, February 22, 2021



Adding Pluggable Databases to an existing Multitenant Data Guard environment

----------------------------------------------------------------------------------------------------

1. Check the source database information.


set lines 200
col DATABASE_HOST for a60;
col HOST_NAME for a15;
col DATABASE_ROLE for a10
col OPEN_MODE for a10
col STARTUP_TIME for a20
SELECT i.HOST_NAME "DATABASE_HOST", i.INSTANCE_NAME "DB_NAME", d.DATABASE_ROLE "DATABASE_ROLE", d.OPEN_MODE "OPEN_MODE", STARTUP_TIME from GV$DATABASE d, gv$instance i where i.INST_ID=d.INST_ID;


DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME

------------------------------ ---------------- ---------------- -------------------- --------------------

virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ WRITE           04-FEB-21
virtual-19crac2.localdomain     buzz18c2         PRIMARY          READ WRITE           04-FEB-21



2. Note down the GUID of the source database in case we need it later


SQL> select guid from v$containers;

GUID
--------------------------------
BA3968236D0B4A71E0535018A8C08092
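This GUID is handy later: when the PDB's datafiles are restored on the standby, they land under an ASM directory named after it (for example +DATA/TEST19CSTDBY/<GUID>/DATAFILE/...). Once the PDB is plugged into the target CDB, you can confirm the same GUID there with a standard dictionary query like the one below (a hedged check, not part of the original run):

SQL> select pdb_name, guid from dba_pdbs where pdb_name = 'BUZZ18C';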



3. Shut down the non-CDB primary cleanly and then restart only one instance in READ ONLY mode

[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c

[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

[oracle@virtual-19crac1 ~]$ srvctl start instance -d buzz18c -n virtual-19crac1 -o "READ ONLY"

[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

[oracle@virtual-19crac1 ~]$ sqlplus / as sysdba


set lines 200

col DATABASE_HOST for a30;

col HOST_NAME for a15;

col DATABASE_ROLE for a10

col OPEN_MODE for a10

col STARTUP_TIME for a20

SELECT i.HOST_NAME "DATABASE_HOST", i.INSTANCE_NAME "DB_NAME", d.DATABASE_ROLE "DATABASE_ROLE", d.OPEN_MODE "OPEN_MODE", STARTUP_TIME

from GV$DATABASE d, gv$instance i where i.INST_ID=d.INST_ID;

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME

----------------------------- ---------------- ---------------- -------------------- --------------------

virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ ONLY            04-FEB-21



4. In the source non-CDB database, create a manifest file to plug it in.


mkdir -p /u01/software/migration

exec dbms_pdb.describe('/u01/software/migration/buzz18c.xml');

Output:

----------

SQL> exec dbms_pdb.describe('/u01/software/migration/buzz18c.xml');

PL/SQL procedure successfully completed
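Optionally, before plugging in, you can verify the manifest against the target CDB. The sketch below is a hedged example using the standard DBMS_PDB.CHECK_PLUG_COMPATIBILITY function, assuming the XML file has been made visible to the target CDB host at the same path:

SQL> set serveroutput on
SQL> declare
       compatible boolean;
     begin
       -- returns TRUE if the described non-CDB can be plugged into this CDB
       compatible := dbms_pdb.check_plug_compatibility(
                       pdb_descr_file => '/u01/software/migration/buzz18c.xml',
                       pdb_name       => 'BUZZ18C');
       if compatible then
         dbms_output.put_line('Plug-in compatible');
       else
         dbms_output.put_line('Not compatible - check PDB_PLUG_IN_VIOLATIONS');
       end if;
     end;
     /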


5. Shut down the source non-CDB database


[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c

[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

6. Create the PDB in the target CDB as below, using the STANDBYS=NONE option.


SQL> create pluggable database buzz18c USING '/u01/software/migration/buzz18c.xml' tempfile reuse NOCOPY standbys=NONE;

Output:
-------------

SQL> create pluggable database buzz18c USING '/u01/software/migration/buzz18c.xml' tempfile reuse NOCOPY  standbys=NONE;

Pluggable database created.
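Before moving on, it is worth reviewing any plug-in warnings that were recorded. This is a generic check against the standard PDB_PLUG_IN_VIOLATIONS view, not part of the original run:

SQL> select cause, type, message
     from pdb_plug_in_violations
     where name = 'BUZZ18C' and status <> 'RESOLVED';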


7.  Check the status of the PDB 


set lines 200
col open_mode for a15
col name for a10
SELECT name,open_mode,recovery_status FROM v$pdbs;


Output

---------

NAME       OPEN_MODE       RECOVERY

---------- --------------- --------

PDB$SEED   MOUNTED         ENABLED

TESTPDB    READ WRITE      ENABLED

TEST121    READ WRITE      ENABLED

WALLE121   READ WRITE      ENABLED

BUZZ18C    MOUNTED         ENABLED



8. Open the PDB in upgrade mode and check the status again.


ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;


output

------------------
SQL> ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;

Pluggable database altered.


SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;


NAME       OPEN_MODE       RECOVERY

---------- --------------- --------

PDB$SEED   MOUNTED         ENABLED

TESTPDB    READ WRITE      ENABLED

TEST121    READ WRITE      ENABLED

WALLE121   READ WRITE      ENABLED

BUZZ18C    MIGRATE         ENABLED

9. As you can see, the PDB has been plugged in and is in MIGRATE mode in the primary CDB. Let's check its status in the standby CDB.

set lines 200

col DATABASE_HOST for a30;

col HOST_NAME for a15;

col DATABASE_ROLE for a10

col OPEN_MODE for a10

col STARTUP_TIME for a20

SELECT i.HOST_NAME "DATABASE_HOST", i.INSTANCE_NAME "DB_NAME", d.DATABASE_ROLE "DATABASE_ROLE", d.OPEN_MODE "OPEN_MODE", STARTUP_TIME

from GV$DATABASE d, gv$instance i where i.INST_ID=d.INST_ID;


DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME

------------------------------ ---------------- ---------------- -------------------- --------------------

chopstdby-19crac1.localdomain  test19cstdby1    PHYSICAL STANDBY MOUNTED              03-FEB-21

chopstdby-19crac2.localdomain  test19cstdby2    PHYSICAL STANDBY MOUNTED              03-FEB-21


SQL>  set lines 200

        col open_mode for a15

        col name for a10

        SELECT name,open_mode,recovery_status FROM v$pdbs;


NAME       OPEN_MODE       RECOVERY

---------- --------------- --------

PDB$SEED   MOUNTED         ENABLED

TESTPDB    MOUNTED         ENABLED

TEST121    MOUNTED         ENABLED

WALLE121   MOUNTED         ENABLED

BUZZ18C    MOUNTED         DISABLED


Note: As you can see, in the standby CDB the newly created PDB is MOUNTED and its RECOVERY status is DISABLED.

If you check the alert log of the standby database, you can see that the PDB datafiles are recorded as 'UNNAMED'. Check below.


output

------------------

Resize operation completed for file# 3, old size 1249280K, new size 1259520K

2021-02-04T10:08:34.296502-06:00

MRP0 (PID:1903): Media Recovery Log +DATA/TEST19CSTDBY/ARCHIVELOG/2021_02_04/thread_1_seq_255.1633.1063616535

Recovery created pluggable database BUZZ18C

BUZZ18C(6):File #37 added to control file as 'UNNAMED00037'. Originally created as:

BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/system.1969.1063291795'

BUZZ18C(6):because the pluggable database was created with nostandby

BUZZ18C(6):or the tablespace belonging to the pluggable database is

BUZZ18C(6):offline.

BUZZ18C(6):File #38 added to control file as 'UNNAMED00038'. Originally created as:

BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/sysaux.1762.1063291831'

BUZZ18C(6):because the pluggable database was created with nostandby

BUZZ18C(6):or the tablespace belonging to the pluggable database is

BUZZ18C(6):offline.

BUZZ18C(6):File #39 added to control file as 'UNNAMED00039'. Originally created as:

BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/undotbs1.1756.1063291857'

BUZZ18C(6):because the pluggable database was created with nostandby

BUZZ18C(6):or the tablespace belonging to the pluggable database is

BUZZ18C(6):offline.
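To see those placeholder files from SQL rather than the alert log, a query like the one below on the standby CDB (while in CDB$ROOT) should list them; the files stay UNNAMED until they are restored from the primary in step 12:

SQL> select file#, name, status from v$datafile where name like '%UNNAMED%';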


10) At this stage, let's focus on the primary CDB. Since the PDB has been plugged in, let's continue with the steps to convert the non-CDB into a proper PDB by connecting to the PDB and running $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql.



 10a) Connect to the primary CDB database


SQL> set lines 200
col DATABASE_HOST for a30
col HOST_NAME for a15
col DATABASE_ROLE for a10
col OPEN_MODE for a10
col STARTUP_TIME for a20
SELECT i.HOST_NAME "DATABASE_HOST", i.INSTANCE_NAME "DB_NAME", d.DATABASE_ROLE "DATABASE_ROLE", d.OPEN_MODE "OPEN_MODE", STARTUP_TIME
from GV$DATABASE d, gv$instance i where i.INST_ID=d.INST_ID;


DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME

------------------------------ ---------------- ---------------- -------------------- --------------------

virtual-19crac1.localdomain     test19c1         PRIMARY          READ WRITE           03-FEB-21

virtual-19crac2.localdomain     test19c2         PRIMARY          READ WRITE           03-FEB-21


 10b) Connect to the PDB database as below

SQL> show pdbs


    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       MOUNTED

         3 TESTPDB                        READ WRITE NO

         4 TEST121                        READ WRITE NO

         5 WALLE121                       READ WRITE NO

         6 BUZZ18C                        MIGRATE    YES


SQL> alter session set container=BUZZ18C;
Session altered.

 10c) Run the script below to convert the non-CDB to a PDB.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


Generally this script won't fail, but in my case I had the parameter MAX_STRING_SIZE set at the CDB level, and because of this the script failed at the last step. Below is the error message.



Last few lines from above command

---------------------------------

10:19:47 SQL> alter pluggable database "&pdbname" open restricted;

alter pluggable database "BUZZ18C" open restricted

*

ERROR at line 1:

ORA-14696: MAX_STRING_SIZE migration is incomplete for pluggable database

BUZZ18C

Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.9.0.0.0

[oracle@virtual-19crac1 migration]$ sqlplus / as sysdba


   PLEASE FOLLOW the sub-steps below ONLY if you got the above error. If you didn't get it, you can go straight to step 11.

So I had to start the CDB primary database in upgrade mode, open this PDB in upgrade mode, and run the script below.
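If you want to check up front whether your CDB is exposed to this ORA-14696 issue, a quick look at the parameter on the primary CDB is enough (a minimal check, not part of the original run):

SQL> show parameter max_string_size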


10c1) Connect to the primary CDB and set cluster_database to false

SQL> alter system set cluster_database=false scope=spfile sid='*';

10c2) Start the CDB primary database in upgrade mode

SQL> startup upgrade


10c3) Open the new PDB in upgrade mode

SQL>  alter pluggable database BUZZ18C open upgrade;


10c4) Connect to the new PDB and run the script below related to MAX_STRING_SIZE

SQL> alter session set container=BUZZ18C;

SQL> show con_name

CON_NAME

------------------------------

BUZZ18C

@?/rdbms/admin/utl32k.sql
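Oracle's MAX_STRING_SIZE migration notes generally recommend recompiling invalid objects after utl32k.sql; assuming that applies here as well, a follow-up recompile in the same PDB would look like:

SQL> @?/rdbms/admin/utlrp.sql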

10c5) Set cluster_database back to TRUE and shut down the CDB primary

SQL> alter system set cluster_database=true scope=spfile sid='*';

SQL> shut immediate;

10c6) Start the CDB primary database using srvctl

[oracle@virtual-19crac1 migration]$ srvctl start database -d test19c
[oracle@virtual-19crac1 migration]$ srvctl status database -d test19c

Instance test19c1 is running on node virtual-19crac1
Instance test19c2 is running on node virtual-19crac2



10c7) Open the PDB in upgrade mode and re-run the noncdb_to_pdb script


SQL> ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;

Pluggable database altered.


SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;


NAME       OPEN_MODE       RECOVERY

---------- --------------- --------

PDB$SEED   MOUNTED         ENABLED

TESTPDB    READ WRITE      ENABLED

TEST121    READ WRITE      ENABLED

WALLE121   READ WRITE      ENABLED

BUZZ18C    MIGRATE         ENABLED


SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


11) Connect to the primary CDB and open the new PDB database.

SQL> show pdbs


    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

          2 PDB$SEED                       MOUNTED

          3 TESTPDB                        READ WRITE NO

          4 TEST121                        READ WRITE NO

          5 WALLE121                       READ WRITE NO

          6 BUZZ18C                        MOUNTED

SQL> alter pluggable database BUZZ18c OPEN;
Pluggable database altered.


SQL>show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       MOUNTED

         3 TESTPDB                        READ WRITE NO

         4 TEST121                        READ WRITE NO

         5 WALLE121                       READ WRITE NO

         6 BUZZ18C                        READ WRITE NO
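Optionally (not part of the original run), if you want the new PDB to open automatically whenever the primary CDB restarts, you can save its state using the standard SAVE STATE clause:

SQL> alter pluggable database BUZZ18C save state;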



12) In the standby container, restore the PDB from the primary as below.



run

{

set newname for pluggable database BUZZ18C to new;

restore pluggable database BUZZ18C from service test19c;

}

Output

---------------------------------------------------------------------


[oracle@chopstdby-19crac1 ~]$ rman target /


Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:11:50 2021

Version 19.9.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.


connected to target database: TEST19C (DBID=2437448537, not open)


RMAN> run

{

set newname for pluggable database BUZZ18C to new;

restore pluggable database BUZZ18C from service test19c;

}

2> 3> 4> 5>

executing command: SET NEWNAME


Starting restore at 04-FEB-21

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=71 instance=test19cstdby1 device type=DISK


channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service test19c

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00037 to +DATA

channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service test19c

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00038 to +DATA

channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service test19c

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00039 to +DATA

channel ORA_DISK_1: restore complete, elapsed time: 00:00:15

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service test19c

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00040 to +DATA

channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service test19c

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00041 to +DATA

channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Finished restore at 04-FEB-21

RMAN>

---------------------------------------------------------------------
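Note that "restore ... from service test19c" requires the standby host to resolve and reach the primary through the test19c service, typically via a tnsnames.ora entry along these lines (host and port below are illustrative placeholders, not taken from this environment):

test19c =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primary-scan-or-host>)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = test19c))
  )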



13) Stop the log apply process on the standby database as below.



DGMGRL> edit database test19cstdby set state='APPLY-OFF';

Succeeded.


DGMGRL> show database test19cstdby


Database - test19cstdby


  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-OFF

  Transport Lag:      23 minutes 48 seconds (computed 8 seconds ago)

  Apply Lag:          23 minutes 47 seconds (computed 8 seconds ago)

  Average Apply Rate: (unknown)

  Real Time Query:    OFF

  Instance(s):

    test19cstdby1 (apply instance)

    test19cstdby2


  Database Warning(s):

    ORA-16855: transport lag has exceeded specified threshold


Database Status:

WARNING


14) Since replication is now stopped, let's switch the standby PDB datafiles to the restored copies as below.


RMAN> switch pluggable database BUZZ18C to copy;


Output

------------------------------------------------------------------------------


[oracle@chopstdby-19crac1 ~]$ rman target /


Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:18:34 2021

Version 19.9.0.0.0


Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.


connected to target database: TEST19C (DBID=2437448537, not open)


RMAN> switch pluggable database BUZZ18C to copy;


using target database control file instead of recovery catalog

datafile 37 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317"

datafile 38 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331"

datafile 39 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339"

datafile 40 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353"

datafile 41 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355"


RMAN>



15) Enable PDB recovery on the standby container


SQL> ALTER SESSION SET container=BUZZ18C;

Session altered.


SQL> ALTER pluggable DATABASE enable recovery;

Pluggable database altered.
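To confirm the change took effect, the same v$pdbs query used earlier should now report RECOVERY = ENABLED for the new PDB:

SQL> select name, open_mode, recovery_status from v$pdbs where name = 'BUZZ18C';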



16) Connect to the PDB in the standby CDB and create a dynamic script to bring the datafiles ONLINE.



 16a) SQL> ALTER SESSION SET container=BUZZ18C;

Session altered.

SQL> set lines 200
col name for a99
select name, status from v$datafile;
select 'ALTER DATABASE datafile ''' || name || ''' online ;' from v$datafile;

Output

-------------------------------------------------------------------------------------------------


NAME                                                                                                STATUS

--------------------------------------------------------------------------------------------------- ---------------

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339               RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353                  RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355               RECOVER


ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;



16b) Now run the ONLINE commands generated above in the PDB.


SQL> ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

        ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

        ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;

        ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;


Database altered.


SQL>

Database altered.


SQL>

Database altered.


SQL>

Database altered.


SQL>

Database altered.


17) Now connect to the standby container and restart the apply process as below.

DGMGRL>  edit database test19cstdby set state='APPLY-ON';

Succeeded.


DGMGRL> show database test19cstdby;

Database - test19cstdby


  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-ON

  Transport Lag:      35 minutes 53 seconds (computed 24 seconds ago)

  Apply Lag:          35 minutes 52 seconds (computed 24 seconds ago)

  Average Apply Rate: 23.00 KByte/s

  Real Time Query:    OFF

  Instance(s):

    test19cstdby1 (apply instance)

    test19cstdby2


  Database Warning(s):

    ORA-16853: apply lag has exceeded specified threshold

    ORA-16855: transport lag has exceeded specified threshold


Database Status:

WARNING



18) Now stop the standby CDB database, start it in READ ONLY mode, and check that it is receiving logs and is in sync with the primary database.
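A hedged sketch of this last step, assuming the standby database resource is registered in Clusterware as test19cstdby (matching the instance names shown earlier):

[oracle@chopstdby-19crac1 ~]$ srvctl stop database -d test19cstdby
[oracle@chopstdby-19crac1 ~]$ srvctl start database -d test19cstdby -o "read only"

Then, on the standby, check the lag either from DGMGRL (show database test19cstdby) or with:

SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');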



 




OCT 2020 OJVM patching in Multitenant RAC database.


1.  Stop the instance on node 1

srvctl stop home -o $ORACLE_HOME -s /u01/software/virtual-19crac1_19c_home_services.txt -n virtual-19crac1

2. Run the conflict check on node 1

export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$PATH

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/31668882

3. Apply the patch

cd /u01/software/31668882

export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/OPatch/opatch apply

4. Start the Oracle services

srvctl start home -o $ORACLE_HOME -s /u01/software/virtual-19crac1_19c_home_services.txt -n virtual-19crac1


5. Check the patch

$ORACLE_HOME/OPatch/opatch lspatches

----------------------------------------------------------------------------------------------------------------------

 Patching on second node:
_______________________

1. Stop the instance on node 2

srvctl stop home -o $ORACLE_HOME -s /u01/software/vm19rac2_19c_home_services.txt -n vm19rac2

2. Run the conflict check on node 2

export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$PATH


$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/31668882

3. Apply the patch on node 2

cd /u01/software/31668882
export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/OPatch/opatch apply


4. Start the Oracle services on node 2

srvctl start home -o $ORACLE_HOME -s /u01/software/vm19rac2_19c_home_services.txt -n vm19rac2


5. Check the patch

$ORACLE_HOME/OPatch/opatch lspatches


-------------------------------------------------------------------------------------------------------------------------

POST Patch -

--------------------------------------------

On node 1.

1. Set the cluster_database to FALSE in the 19c database.


sqlplus /nolog

SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> alter system set cluster_database=false scope=spfile;

2.  Stop the database using srvctl and start only the first instance as below.


$ORACLE_HOME/bin/srvctl stop database -d <dbname>

sqlplus /nolog

SQL> CONNECT / AS SYSDBA
SQL> STARTUP UPGRADE;

3. Start the PDBs in upgrade mode as below.

show pdbs
alter pluggable database all open upgrade;
show pdbs

Make sure that all the PDBs are in upgrade mode

Sample output
-----------------------------

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MIGRATE    YES
         3 TESTPDB                        MIGRATE    YES
         4 TEST121                        MIGRATE    YES
         5 WALLE121                       MIGRATE    YES
         6 BUZZ18C                        MIGRATE    YES


4. Run the datapatch 


export ORACLE_HOME=/u01/app/oracle/product/19c/db_1
export PATH=$ORACLE_HOME/bin:$PATH
cd $ORACLE_HOME/OPatch
./datapatch -verbose 

5. Now set cluster_database back to true and shut down the database so that it can be started in normal mode.

sqlplus /nolog

SQL> CONNECT / AS SYSDBA
SQL> alter system set cluster_database=true scope=spfile;
SQL> SHUTDOWN immediate;

6. Start the database back in normal mode.


$ORACLE_HOME/bin/srvctl start database -d <dbname>


7. Check the registry

set lines 1000
col action for a12
col version for a40
col description for a85
col action_date for a20
select description, action, to_char(action_time,'DD/MM/RR HH24:MI:SS') action_date, ' ' version
from dba_registry_sqlpatch;

8. Run the utlrp script to compile the invalid objects

cd $ORACLE_HOME/rdbms/admin

sqlplus /nolog

SQL> CONNECT / AS SYSDBA

SQL> @utlrp.sql
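To verify the recompile, a quick count of remaining invalid objects (standard dictionary view) can be run afterwards:

SQL> select count(*) from dba_objects where status = 'INVALID';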


------------------------------------------------------END -------------------------------------------------------