
Friday, December 16, 2022

What is a Data Pump Bundle Patch in Oracle and how do you apply it to a RAC environment?

What is Oracle Data Pump Bundle Patch?

Data Pump Bundle Patches are proactive patches for Data Pump. Oracle generally does not include Data Pump bug fixes in the regular database Release Update bundles; instead, it releases them separately as Data Pump proactive patches. These patch bundles include performance and stability fixes for Data Pump, and they are available only for database versions 19.10 and higher.

One major benefit of these patches is that applying the Data Pump bundle patch can yield a significant performance improvement for Data Pump operations.

We can refer to the document below to download the latest Data Pump bundle patches.

 Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)

Let's see how to apply the Data Pump Bundle Patch in our 19c RAC environment.

Source environment setup:

RAC Database : LABDBPR (labdbpr1 and labdbpr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/dbhome_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : oraprim01 & oraprim02

[oracle@oraprim01 software]$ $ORACLE_HOME/OPatch/opatch lspatches | grep "Database Release Update" 
34419443;Database Release Update : 19.17.0.0.221018 (34419443)
[oracle@oraprim01 software]$


1) Download and unzip the patch (p34734035_1917000DBRU_Generic.zip) in the staging area on all nodes of the cluster.
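The staging step itself is just an unzip on each node; a minimal sketch, assuming the staging path used in the listings below:

```shell
# Stage and extract the Data Pump bundle patch (repeat on every node)
cd /u01/software
unzip -q p34734035_1917000DBRU_Generic.zip
ls -d 34734035
```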

Node 1:

[oracle@oraprim01 software]$ ls -ltr

total 2304
drwxr-xr-x. 4 oracle oinstall      48 Nov 21 02:52 34734035
-rw-rw-r--. 1 oracle oinstall   26780 Dec  7 06:11 PatchSearch.xml
-rwxr-x---. 1 oracle dba      2328024 Dec 16 07:47 p34734035_1917000DBRU_Generic.zip
[oracle@oraprim01 software]$

Node 2:

[oracle@oraprim02 software]$ ls -ltr
total 2304
drwxr-xr-x. 4 oracle oinstall      48 Nov 21 02:52 34734035
-rw-rw-r--. 1 oracle oinstall   26780 Dec  7 06:11 PatchSearch.xml
-rwxr-x---. 1 oracle oinstall 2328024 Dec 16 07:47 p34734035_1917000DBRU_Generic.zip
[oracle@oraprim02 software]$


2) Ensure that the prerequisites below are met.

2a) Ensure that the Oracle Database 19c Release Update 19.17.0.0.221018 (patch 34419443) is already applied to the database.


2b) Ensure that the Oracle home on which you are installing the patch, or from which you are rolling it back, is at Oracle Database 19c Release Update 19.17.0.0.221018.

2c) Ensure that you have OPatch version 12.2.0.1.32 or later. Oracle recommends using the latest available OPatch version.

2d) Determine whether any currently installed interim patches conflict with patch 34734035, as shown below.

Let's check for conflicts in our environment.

cd /u01/software/34734035
export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

We can see that no currently installed patches on either node conflict with the patch (34734035) that we are going to apply.

3) Apply the patch.

For a RAC environment, shut down all services (database, ASM, listeners, nodeapps, and CRS daemons) running from the Oracle home of the node you want to patch. After you patch this node, start its services again.

We can stop the database instance on node 1 using the srvctl stop instance command.

[oracle@oraprim01 34734035]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02

[oracle@oraprim01 34734035]$ srvctl stop instance -d labdbpr -n oraprim01

[oracle@oraprim01 34734035]$ srvctl status database -d labdbpr
Instance labdbpr1 is not running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 34734035]$
Apply the patch on this node
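The apply itself follows the standard OPatch workflow; a sketch, run as the oracle user from the unzipped patch directory:

```shell
# Apply the Data Pump bundle patch from the staged patch directory
cd /u01/software/34734035
$ORACLE_HOME/OPatch/opatch apply

# Confirm the patch is now registered in the home's inventory
$ORACLE_HOME/OPatch/opatch lspatches | grep 34734035
```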

Start the database on this node
[oracle@oraprim01 34734035]$ srvctl status database -d labdbpr
Instance labdbpr1 is not running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 34734035]$ srvctl start instance -d labdbpr -n oraprim01
[oracle@oraprim01 34734035]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 34734035]$
4) Stop the database instance on node 2 and apply the patch.
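The node 2 sequence mirrors node 1; a sketch, assuming the same staged patch directory:

```shell
# On oraprim02: stop only the local instance, then apply the patch
srvctl stop instance -d labdbpr -n oraprim02
cd /u01/software/34734035
$ORACLE_HOME/OPatch/opatch apply
```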

Start the database instance on this node

[oracle@oraprim02 34734035]$ srvctl start instance -d labdbpr -n oraprim02
[oracle@oraprim02 34734035]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim02 34734035]$

5) Post-patch installation steps.

5a) Run datapatch.

Since this is a RAC database, we need to run datapatch only once, from a single node. In our case we will run it from node 1.
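A sketch of the datapatch run (the standard post-patch step, executed from the patched home):

```shell
# Run datapatch once, from node 1, after the home is patched on all nodes
cd $ORACLE_HOME/OPatch
./datapatch -verbose
```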



5b) Run utlrp.sql to recompile invalid objects, but only if dpload.sql is part of the patch:
@?/rdbms/admin/utlrp.sql

We have successfully applied the DATAPUMP BUNDLE PATCH 19.17.0.0.0. 

6) Rollback steps. If we want to roll back the patch, we just have to follow the three steps below.

6a) Stop the database instance on the node where we are rolling back the patch.

6b) Roll back the patch using the commands below:
      export ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/dbhome_1
      export PATH=$ORACLE_HOME/bin:$PATH
      $ORACLE_HOME/OPatch/opatch rollback -id 34734035

6c) Start the database/services on the node.


Info: Bugs Fixed by This Patch

The following are the bugs fixed by this patch:
  24338134: DBMS_METADATA.GET_DDL GENERATES INCORRECT DDL FOR TABLE WITH USING INDEX CLAUSE
  25143018: TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y NOT HONORED  LOB SEGMENTS
  26565187: DATA PUMP EXPORT PERFORMANCE IS MUCH SLOWER IN 12.2
  26995275: MISSING ALTER ANY SEQUENCE GRANT IN THE DATAPUMP_IMP_FULL_DATABASE ROLE
  27679005: CARTESIAN PRODUCT IN INDEX COMPOSITE PARTITION DURING EXPORT.
  28318139: ORA-31003 ERROR WHEN IMPORTING FULL DATABASE IN PARALLEL
  28357349: SCHEMA LEVEL EXPORT/IMPORT CHANGES VIRTUAL COLUMN DEFINATION
  28485837: MOVE CHUNK: FAILS WITH ORA-00955 WHEN UNIQUE INDEX AND CONSTRAINT IS CREATED
  28555193: DBMS_METADATA.GET_DDL CAPTURE INCORRECT STORAGE OPTIONS OF THE XML COLUMN ON GTT
  28640918: SLOW PARSE ON SQL FOR TRANSPORTABLE TABLESPACES
  28771564: DATAPUMP EXPORT INVOKED BY A PRIVILEGE USER EXECUTES A QUERY FOR V$OPEN_CURSOR
  28990738: 12.2 DBMS_METADATA.GET_DDL IS SLOW DUE TO SLOW ACCESS ON DICTIONARY VIEWS
  29276889: ATP-D: DATA PUMP IMPORT FROM ATP-D INSTANCE TO A LOCAL DB INSTANCE FAILS
  29284458: ORA-1426 DBMS_METADATA REFERENCE PARTITIONS  HIGHER NUMBER
  29450761: CANT IMPORT INTO AN EXISTING TABLE WITH EXTRA COLUMNS WHEN ENCRYPTION USED
  29500928: IMPDP DIFFERENT BEHAVIOUR FOR IDENTITY COLUMN WHEN IMPORT ALL VERSUS METADATA_ONLY
  29543605: 18.4 ADWC - ORA-39242: UNABLE TO EXPORT/IMPORT "LONG RAW" DATA TYPE
  29613245: ORA-31684 ORA-39112 WITH FIX 28539085 AND VERSION=11.2
  29792959: IMPDP WITH TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y GENERATES ARCHIVES
  29959025: EXPDP RUNNING LONG TIME QUERYING KU$_SUBPARTITION_EST_VIEW WHEN PROCESSING TABLE_DATA
  30155338: POSSIBLE DEADLOCK/TIMEOUT ERRORS DURING PARALLEL IMPORT WITH TABLE_EXISTS_ACTION=REPLACE
  30157766: ORA-21560 DBMS_METADATA.FETCH_DDL  IN 19C NOT IN 12.2
  30191952: TTS FAILS WITH ORA-01422 ON V$ENCRYPTION_WALLET
  30430932: DBMS_METADATA NOT DISPLAYING THE SEMICOLON AND SLASH FOR TYPE SPECIFICATIONS
  30582819: REMAP TABLESPACE IS NOT CONSIDERED FOR LOCAL TEMPORARY TABLESPACE DURING IMPDP
  30662417: IMPDP WORKER TERMINATED WITH ORA-39029 AFTER MULTIPLE ORA-01775
  30763851: IMPDP 11.2 TO 18C OR HIGHER HITS ORA-904 WHEN TABLES HAVE EXTENDED STATISTICS
  30822078: IMPDP VERY SLOW DUE TO PROCESS REORDERING
  30858671: 18C DBMS_METADATA.GET_DDL FAILED WITH ORA-16000 IN READ ONLY MODE
  30928455: DATA PUMP EXPORT | HITTING ORA-31637 WHILE RUNNING DATA PUMP-DPLOAD CONCURRENCY TEST IN SAME PDB
  30944402: SELECT FROM MASTER TABLE RUNS SLOW DURING TABLE_DATA EXPORT WHEN THERE ARE MANY SUBPARTITIONS
  30978304: ORA-20000 DURING IMPDP WITH STATS AND THE UNIQUE INDEX FOR THE PK IS NOT CREATED
  31008542: DPUMP IMPORT FAILS WITH ORA-39083:TABLE:AUDSYS.AMGT$DP$AUD$UNIFIED
  31031133: SIMULTANEOUS DATA PUMP JOB CREATION FAILS DUE TO ATOMICITY WINDOW
  31050896: PARALLEL DATAPUMP SLOW ON CONSTRAINTS
  31174337: DBMS_METADATA.GET_DDL GENERATES NO KEYWORDS FOR NOT COMPRESSED INDEXES
  31191614: TTS EXPDP QUERIES V$ENCRYPTED_TABLESPACES FOR EVERY TBS SLOWING DOWN PERFORMANCE
  31200854: ADB-D: IMPORT PERFORMANCE OF PACKAGE_BODY
  31393386: SPIN-OFF OF BUG# 31317961 FOR PARTIAL BACKOUT OF BUG# 27403988 FROM MAIN LABEL
  31402031: DBMS_METADATA_UTIL THROWS AN INVALID CURSOR EXCEPTION.
  31412130: ADBD:: COMPLETE FIX FOR 29543605 WHICH INCLUDES ALL THE MISSING FILES
  31424070: APPSST19C: XTTS PDB - TABLE IMPORT/CREATION FAILED WITH ORA-39083 ORA-14334
  31711479: ADB-S: ORA39126 AND ORA01031 WHILE IMPORT USING FA FULL DUMP INTO ADB-S
  31725941: TOTAL ESTIMATION USING BLOCKS METHOD IS MISSING STARTING WITH 12.2
  31830685: ZDM :  IMPORT ADW-S DB LINK MIGRATION THROWS INTERNAL ERROR
  31976738: DATAPUMP IMP FAILURE- ORA-39844: BAD STREAM FORMAT DETECTED: [KLAPRS_15] [31] [60] [4] [1]
  32077706: ORA-60 FOR MASTER TABLE WHEN INHERITED COMPRESSION TABLESPACE SETTING
  32096059: IMPDP TO 19C USING EXPORT DUMP OF 11.2.0.4 HANGS WITH ENQ: TM - CONTENTION
  32206227: PDB IS IN MIGRATE STATUS AFTER DBCLI UPGRADE FROM 12.1.0.2 TO 19.8.0.0.0
  32370367: EXPDP IN 19.7 THREE TIMES SLOWER THAN IT WAS IN 11.2.0.4
  32452792: DBMS_METADATA.GET_DDL GETS WRONG OUTPUT FROM 12.2.0.1. TESTED TILL 19.3.0.0
  32512780: PROCOBJ PLSQL SCRIPTS ARE NOT EXCLUDED ON IMPORT WITH EXCLUDE=TAG
  32647307: ADB-D:: PACKAGE BODIES IMPORT SLOWER AFTER AUTONOMOUS REFRESH TO 19.10DBRU
  32731035: ATPD MIGRATION:ORA-04021: TIMEOUT OCCURRED WHILE WAITING TO LOCK OBJECT
  33163877: ATPD MIGRATION:IMPDP HITS TABLE OR VIEW DOES NOT EXIST ON SOME DATAPUMP RELATED TABLES
  33204663: TCH19C :: ORA-39139: DATA PUMP DOES NOT SUPPORT XMLTYPE OBJECTS WHEN DOING XTTS WITH BINARY XML STORAGE
  33297599: UNUSED XMLTYPE/CLOB COLUMNS CAUSE IMPORT FAILURE
  33346378: REWRITE DATA PUMP PATCH LOCKING TEST: TKDPATCHRAC.TSC
  33448450: TCH19C :: ORA-01647: TABLESPACE 'APPS_TS_TX_DATA' IS READ-ONLY, CANNOT ALLOCATE SPACE
  33470563: METADATA API FAILS TO RECOGNIZE TAB CHARACTER AS DELIMITER WHEN PARSING SOURCE LINES OF TYPE OBJECT
  33498804: DATAPUMP IMPORT IGNORES EXCLUDE AND INCLUDE VALUES FOR TAGS FOR IMPORT CALLOUTS
  33517865: Fix for Bug 33517865
  33526315: OCI FADEV: IMPORT ERROR - PLS-00201: IDENTIFIER 'SYS.DBMS_AQADM' MUST BE DECLARED
  33660169: CONSOLIDATED BUG FOR DATA PUMP AQ FIXES 31338354, 31844376, 31868443 FOR 19.10 AND LATER
  33720650: TCH19C :: OCI-21500: INTERNAL ERROR CODE [QMCXDGETQNAMEINFO2], [14003] IN XMLTYPE CLOUMN TYPE
  33731212: SELF DEADLOCK ON PACKAGE SYS.DBMS_TTS DURING 19.13 + ONEOFF'S DATAPATCH RUN ..
  33735435: TRACKING BUG FOR COMBO OF 32759991 32878145 32919937 32984678 (REPLACEMENT FOR MINI MLR 33407604)
  34052641: END_PLUGTS_BLK OBJECT TYPE MISSING FROM FULL TTS EXPORT WHEN INCLUDE SPECIFIED
  34174611: OCI FADEV: IMPDP CRASH AS AQ DEPENDENT TABLES DIDN'T GET EXPORTED IN TABLE MODE
  34457840: MISSING SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_QUEUE_TABLE CALL FROM DATAPUMP IMPORT
  34525626: TRACKING BUG TO MERGE 33599275 AND 33498804 SO CAN BE BACKPORTED TOGETHER TO 19.16
  34600787: GET_DDL FAILS TO GENERATE TYPE DEFINITION WHEN NOT EVOLVED AND HAS TRAILING BLANK LINE
  34710181: BUG FIX FOR 34119289 ADDS XML STRUCTURE REF THAT DOESN'T EXIST IN 11.2 CAUSING TEST FAILURES


Thank you
Sambaiah Sammeta


Thursday, December 15, 2022

How to Convert a Non-CDB and Add It as a New PDB to a RAC CDB with a Standby Database

We can use the steps below to add a new PDB to an existing RAC CDB that has a physical standby database configured.

1. Check the source database information.

DATABASE_HOST    DB_NAME    DATABASE_ROLE    OPEN_MODE   STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ WRITE           04-FEB-21
virtual-19crac2.localdomain     buzz18c2         PRIMARY          READ WRITE           04-FEB-21

2. Note down the GUID of the source database in case we need to check something in the standby database, since all the standby datafiles will be created under a directory named after the GUID.

SQL> select guid from v$containers;
GUID
--------------------------------
BA3968236D0B4A71E0535018A8C08092

3. Shut down the non-CDB primary cleanly, then restart only one instance in READ ONLY mode.

[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c
[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2              

[oracle@virtual-19crac1 ~]$ srvctl start instance -d buzz18c -n virtual-19crac1 -o "READ ONLY"

[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

[oracle@virtual-19crac1 ~]$sqlplus / as sysdba

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ ONLY            04-FEB-21

4. In the source non-CDB database, create a manifest file for the plug-in operation.

mkdir -p /u01/software/migration

SQL> exec dbms_pdb.describe('/u01/software/migration/buzz18c.xml');

PL/SQL procedure successfully completed

5. Shut down the source non-CDB database.

[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c       
[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

6. Create the PDB in the target CDB database as below, using the STANDBYS=NONE option.

SQL> create pluggable database buzz18c USING '/u01/software/migration/buzz18c.xml' tempfile reuse NOCOPY standbys=NONE;

Pluggable database created.

7.  Check the status of the PDB 

set lines 200
col open_mode for a15
col name for a10
SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE  RECOVERY
---------- ---------- --------
PDB$SEED   MOUNTED    ENABLED
BUZZ18C    MOUNTED    ENABLED

8. Open the database in upgrade mode and check the status again.

ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;
Pluggable database altered.

SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE  RECOVERY
---------- ---------- --------
PDB$SEED   MOUNTED    ENABLED
BUZZ18C    MIGRATE    ENABLED


9. As you can see, the PDB has been plugged in and is in MIGRATE mode in the primary CDB. Let's check its status in the standby CDB.

DATABASE_HOST    DB_NAME     DATABASE_ROLE    OPEN_MODE       STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtualdr-19crac1.localdomain  test19cstdby1    PHYSICAL STANDBY MOUNTED      03-FEB-21
virtualdr-19crac2.localdomain  test19cstdby2    PHYSICAL STANDBY MOUNTED       03-FEB-21

SQL>  set lines 200
col open_mode for a15
col name for a10
SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE       RECOVERY
---------- --------------- --------
PDB$SEED   MOUNTED         ENABLED
BUZZ18C    MOUNTED         DISABLED

Note: As you can see, in the standby CDB the newly created PDB is MOUNTED and its recovery status is DISABLED.

If you check the alert log of the standby database, you can see that the PDB datafiles are recorded as 'UNNAMED'. Check below.

Alert log output
----------------------

Resize operation completed for file# 3, old size 1249280K, new size 1259520K
2021-02-04T10:08:34.296502-06:00
MRP0 (PID:1903): Media Recovery Log +DATA/TEST19CSTDBY/ARCHIVELOG/2021_02_04/thread_1_seq_255.1633.1063616535
Recovery created pluggable database BUZZ18C
BUZZ18C(6):File #37 added to control file as 'UNNAMED00037'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/system.1969.1063291795'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.
BUZZ18C(6):File #38 added to control file as 'UNNAMED00038'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/sysaux.1762.1063291831'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.
BUZZ18C(6):File #39 added to control file as 'UNNAMED00039'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/undotbs1.1756.1063291857'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.

10) At this stage, let's focus on the primary PDB. Since the PDB has been plugged in, let's continue converting the non-CDB into a PDB by connecting to the PDB and running $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql.

 10a) Connect to Primary CDB database

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     test19c1         PRIMARY          READ WRITE           03-FEB-21
virtual-19crac2.localdomain     test19c2         PRIMARY          READ WRITE           03-FEB-21

 10b) Connect to PDB database as below

CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   MOUNTED
     3 BUZZ18C    MIGRATE    YES

SQL> alter session set container=BUZZ18C;
Session altered.

 10c) Run the below script to convert non-cdb to pdb.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

Generally this script won't fail, but in my case I had the MAX_STRING_SIZE parameter set at the CDB level, and because of it the command failed at the last step. Below is the error message.

Last few lines from above command
---------------------------------
10:19:47 SQL> alter pluggable database "&pdbname" open restricted;
alter pluggable database "BUZZ18C" open restricted
                *
ERROR at line 1:
ORA-14696: MAX_STRING_SIZE migration is incomplete for pluggable database
BUZZ18C
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.9.0.0.0
[oracle@virtual-19crac1 migration]$ sqlplus / as sysdba

To address this issue, follow the sub-steps below ONLY if you hit the above error. If you didn't, you can go to step 11.

I had to start the primary CDB in upgrade mode, open the PDB in upgrade mode, and run the script below.

10c1) Connect to the primary CDB, set cluster_database to FALSE, and bounce the database.

SQL > alter system set cluster_database=false scope=spfile sid='*';
srvctl stop database -d test19c

10c2) start the CDB primary database in upgrade mode

                SQL> startup upgrade

10c3) Open the new PDB in upgrade mode.

SQL>  alter pluggable database BUZZ18C open upgrade;

10c4) Connect to the new PDB and run the script below, related to MAX_STRING_SIZE.

SQL> alter session set container=BUZZ18C;
SQL> show con_name
CON_NAME
------------------------------
BUZZ18C

@?/rdbms/admin/utl32k.sql

10c5) Set cluster_database back to TRUE and shut down the CDB primary.

SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shut immediate;

10c6) Start the CDB primary database using srvctl.

[oracle@virtual-19crac1 migration]$ srvctl start database -d test19c
[oracle@virtual-19crac1 migration]$ srvctl status database -d test19c
Instance test19c1 is running on node virtual-19crac1
Instance test19c2 is running on node virtual-19crac2

10c7) Open the PDB in upgrade mode and re-run the noncdb_to_pdb script.

SQL> ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;
Pluggable database altered.

SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE  RECOVERY
---------- ---------- --------
PDB$SEED   MOUNTED    ENABLED
BUZZ18C    MIGRATE    ENABLED

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

11) Connect to the primary CDB and open the new PDB.

SQL> show pdbs

CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   MOUNTED
     3 BUZZ18C    MOUNTED


 SQL> alter pluggable database BUZZ18c OPEN;
   Pluggable database altered.

 SQL>show pdbs

CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   MOUNTED
     3 BUZZ18C    READ WRITE NO

12) In the standby container, restore the PDB from the primary as below.

run
{
set newname for pluggable database BUZZ18C to new;
restore pluggable database BUZZ18C from service test19c;
}

                Output

---------------------------------------------------------------------

                [oracle@chopstdby-19crac1 ~]$ rman target /
                Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:11:50 2021
                Version 19.9.0.0.0
                Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
                connected to target database: TEST19C (DBID=2437448537, not open)
                RMAN> run
                {
                set newname for pluggable database BUZZ18C to new;
                restore pluggable database BUZZ18C from service test19c;

                }

                2> 3> 4> 5>

                executing command: SET NEWNAME
                Starting restore at 04-FEB-21
                using target database control file instead of recovery catalog
                allocated channel: ORA_DISK_1
                channel ORA_DISK_1: SID=71 instance=test19cstdby1 device type=DISK
                channel ORA_DISK_1: starting datafile backup set restore
                channel ORA_DISK_1: using network backup set from service test19c
                channel ORA_DISK_1: specifying datafile(s) to restore from backup set
                channel ORA_DISK_1: restoring datafile 00037 to +DATA
                channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
                channel ORA_DISK_1: starting datafile backup set restore
                channel ORA_DISK_1: using network backup set from service test19c
                channel ORA_DISK_1: specifying datafile(s) to restore from backup set
                channel ORA_DISK_1: restoring datafile 00038 to +DATA
                channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
                channel ORA_DISK_1: starting datafile backup set restore
                channel ORA_DISK_1: using network backup set from service test19c
                channel ORA_DISK_1: specifying datafile(s) to restore from backup set
                channel ORA_DISK_1: restoring datafile 00039 to +DATA
                channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
                channel ORA_DISK_1: starting datafile backup set restore
                channel ORA_DISK_1: using network backup set from service test19c
                channel ORA_DISK_1: specifying datafile(s) to restore from backup set
                channel ORA_DISK_1: restoring datafile 00040 to +DATA
                channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
                channel ORA_DISK_1: starting datafile backup set restore
                channel ORA_DISK_1: using network backup set from service test19c
                channel ORA_DISK_1: specifying datafile(s) to restore from backup set
                channel ORA_DISK_1: restoring datafile 00041 to +DATA
                channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
                Finished restore at 04-FEB-21
                RMAN>

                ---------------------------------------------------------------------

13) Stop the log apply process on the standby database as below.


DGMGRL> edit database test19cstdby set state='APPLY-OFF';
Succeeded.

DGMGRL> show database test19cstdby

Database - test19cstdby
  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      23 minutes 48 seconds (computed 8 seconds ago)
  Apply Lag:          23 minutes 47 seconds (computed 8 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    test19cstdby1 (apply instance)
    test19cstdby2
  Database Warning(s):
    ORA-16855: transport lag has exceeded specified threshold
  Database Status:
  WARNING


14) Now that redo apply is stopped, let's switch the standby PDB datafiles to the restored copies as below.

RMAN> switch pluggable database BUZZ18C to copy;

[oracle@chopstdby-19crac1 ~]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:18:34 2021
Version 19.9.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST19C (DBID=2437448537, not open)

RMAN> switch pluggable database BUZZ18C to copy;
using target database control file instead of recovery catalog
datafile 37 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317"
datafile 38 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331"
datafile 39 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339"
datafile 40 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353"
datafile 41 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355"
RMAN>


15) Enable PDB recovery in the standby container.

SQL> ALTER SESSION SET container=BUZZ18C;

Session altered.

SQL> ALTER pluggable DATABASE enable recovery;

Pluggable database altered.

16) Connect to the PDB in the standby CDB and create a dynamic script to bring the datafiles ONLINE.

16a) Generate the dynamic SQL.


SQL> ALTER SESSION SET container=BUZZ18C;
Session altered.

SQL> set lines 200
col name for a99
select name, status from v$datafile;
select 'ALTER DATABASE datafile ''' || name || ''' online;' from v$datafile;
Output
-------------------------------------------------------------------------------------------------
NAME                                                                                                STATUS
--------------------------------------------------------------------------------------------------- ---------------

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339               RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353                  RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355               RECOVER


ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;

16b) Now run the ONLINE commands above in the PDB.

SQL> ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;
Database altered.

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;
Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;

Database altered.

17) Now connect to the standby container and restart the apply process as below.

DGMGRL>  edit database test19cstdby set state='APPLY-ON';

Succeeded.

DGMGRL> show database test19cstdby;

Database - test19cstdby
  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      35 minutes 53 seconds (computed 24 seconds ago)
  Apply Lag:          35 minutes 52 seconds (computed 24 seconds ago)
  Average Apply Rate: 23.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    test19cstdby1 (apply instance)
    test19cstdby2
  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold
    ORA-16855: transport lag has exceeded specified threshold
  Database Status:
  WARNING

18) Now stop the standby CDB, start it in READ ONLY mode, and check that it is receiving redo and is in sync with the primary database.
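That final bounce of the standby CDB can be sketched with srvctl (database name taken from this walkthrough):

```shell
# Bounce the standby CDB into READ ONLY mode
srvctl stop database -d test19cstdby
srvctl start database -d test19cstdby -o "read only"
```

Redo transport and apply can then be verified with DGMGRL (show database test19cstdby) or by querying v$dataguard_stats.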


Hope this helps 

Thanks
Sambaiah Sammeta


Sunday, December 4, 2022

What are Oracle's Monthly Recommended Patches (MRPs) and how do you apply an MRP to a RAC environment in rolling fashion?

Oracle's Monthly Recommended Patches (MRP)

MRPs are a collection of recommended one-off patches bundled together. Starting with Release Update 19.17, Oracle provides these monthly recommended patches for the Linux x86-64 platform.

MRPs provide proactive patching between Release Updates and include the fixes documented in "Oracle Database Important Recommended Patches" (My Oracle Support Doc ID 555.1).

MRPs are cumulative, do not change the release number, and are deployed using opatchauto. As of now, MRPs are available only on the Linux x86-64 platform.


Where to download the MRPs from?

We can download the monthly recommended patches using My Oracle Support Doc ID 888.1.

The note shows that MRPs are currently available only for Linux x86-64.

Source environment setup:

RAC Database : LABDBPR (labdbpr1 and labdbpr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.14.0.0/dbhome_1
Version      : Oracle Database 19c EE - Production Version 19.16.0.0.0
Hosts        : oraprim01 & oraprim02



Let's apply the first Monthly Recommended Patch to our 19c RAC database.

1) Download the below MRP patch and unzip it in '/u01/software' on both nodes.

p34737974_1917000DBRU_Linux-x86-64.zip
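As an aside, the zip name itself encodes the patch number, the base RU, and the platform (p&lt;patch&gt;_&lt;release&gt;DBRU_&lt;platform&gt;.zip); a purely illustrative shell sketch of pulling those fields apart:

```shell
# Illustrative only: split the MRP zip name into its parts.
zipfile="p34737974_1917000DBRU_Linux-x86-64.zip"

base="${zipfile%.zip}"        # p34737974_1917000DBRU_Linux-x86-64
patch_id="${base%%_*}"        # p34737974
patch_id="${patch_id#p}"      # 34737974
rest="${base#*_}"             # 1917000DBRU_Linux-x86-64
release="${rest%%DBRU*}"      # 1917000
platform="${rest#*DBRU_}"     # Linux-x86-64

echo "$patch_id $release $platform"
```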

2) Check the README of the patch for the patch details.

3) Patch prerequisites.

Our OPatch version is 12.2.0.1.33, so we already have the latest OPatch version installed.

4) Run the patch conflict check for the MRP on all nodes of the cluster.
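One common way to do this (a sketch; the staging path is the one used in this post) is OPatch's prereq check, run as the oracle user on each node:

```shell
# Sketch: check the unzipped MRP for conflicts against this Oracle home.
# /u01/software/34737974 is the staging path used in this post.
check_mrp_conflicts() {
    "$ORACLE_HOME/OPatch/opatch" prereq CheckConflictAgainstOHWithDetail \
        -phBaseDir /u01/software/34737974
}
```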

5) Check the current RU patches applied to the Oracle home.
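A quick way to see the applied RU is opatch lspatches; a small helper sketch:

```shell
# List the Database Release Update currently applied to this home
current_ru() {
    "$ORACLE_HOME/OPatch/opatch" lspatches | grep "Database Release Update"
}
```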



6) Stop the Oracle database instance running from the home to which we are going to apply the MRP.

Since this is a RAC database, let's stop the instance using the srvctl command.

[oracle@oraprim01 software]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02

[oracle@oraprim01 software]$ srvctl stop instance -d labdbpr -i labdbpr1

[oracle@oraprim01 software]$  srvctl status database -d labdbpr
Instance labdbpr1 is not running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 software]$


7) Apply the patch on the first node (oraprim01), where the database instance is currently down.

$ORACLE_HOME/OPatch/opatchauto apply -binary /u01/software/34737974  -oh $ORACLE_HOME -target_type oracle_database

8) Start the instance on node oraprim01 and proceed with the next step.

[oracle@oraprim01 34737974]$  srvctl status database -d labdbpr
Instance labdbpr1 is not running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 34737974]$ srvctl start  instance -d labdbpr -i labdbpr1
[oracle@oraprim01 34737974]$  srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim01 34737974]$

9) Stop the second instance of the database on node oraprim02 so that we can apply the patch on this node.

[oracle@oraprim02 34737974]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim02 34737974]$ srvctl stop  instance -d labdbpr -i labdbpr2
[oracle@oraprim02 34737974]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is not running on node oraprim02
[oracle@oraprim02 34737974]$

10) Apply the patch on the second node (oraprim02) as shown below.

$ORACLE_HOME/OPatch/opatchauto apply -binary /u01/software/34737974  -oh $ORACLE_HOME -target_type oracle_database

11) Start the instance on node oraprim02.

[oracle@oraprim02 34737974]$ srvctl start instance -d labdbpr -i labdbpr2
[oracle@oraprim02 34737974]$ srvctl status database -d labdbpr
Instance labdbpr1 is running on node oraprim01
Instance labdbpr2 is running on node oraprim02
[oracle@oraprim02 34737974]$

12) Do we need 'datapatch' after applying the MRP patch?

Always read the README for any pre or post patch steps.

As per the README of the MRP, we don't have to run datapatch after the patch has been applied to the Oracle database home: MRP 19.17.0.0.221115 does not include any SQL changes, so datapatch is not needed post installation. Future MRPs may include SQL changes, in which case datapatch would need to be run.
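If you want to confirm the SQL-level patch state for yourself, the dba_registry_sqlpatch view records everything datapatch has applied; a sketch (run as SYSDBA):

```shell
# Sketch: show what datapatch has recorded in this database.
show_sqlpatch_history() {
    sqlplus -s / as sysdba <<'EOF'
set linesize 200
column description format a60
select patch_id, status, description
from   dba_registry_sqlpatch
order  by action_time;
EOF
}
```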

13) How do we check whether an MRP is applied, or which MRPs are applied, in an existing Oracle database home?

Unfortunately, neither 'opatch lsinventory' nor dba_registry gives any information on the MRP itself; the lsinventory command only lists the individual one-off patches that the MRP delivered. Searching the inventory for the MRP patch number returns nothing:

[oracle@oraprim01 34737974]$  $ORACLE_HOME/OPatch/opatch lsinv | grep 34737974
[oracle@oraprim01 34737974]$
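Since the MRP number itself never shows up, one workaround is to grep for one of the one-off fixes the MRP delivers instead; a sketch (the bug number in the usage line is a placeholder, not a real constituent of this MRP):

```shell
# Sketch: check whether a given one-off patch is present in this home.
# "opatch lspatches" prints one "patch_id;description" line per patch.
is_fix_present() {
    "$ORACLE_HOME/OPatch/opatch" lspatches | grep -q "^$1;"
}

# Usage (placeholder bug number):
# is_fix_present 12345678 && echo "fix is present"
```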

If you check the lsinventory output, it lists all of the one-off patches that were applied as part of this MRP.

Maybe Oracle will include the MRP patch info in the inventory in future releases, or provide a way to check whether a specific MRP is applied.

Thanks for reading this post.

Thanks
Sambaiah Sammeta