Wednesday, May 17, 2023

Active Data Guard DML Redirection Feature in 19c

In this post we will see how we can use the DML Redirection feature on an Oracle 19c Active Data Guard standby.

Primary Source Environment setup

RAC Database : RENODBPR ( renodbpr1 & renodbpr2)
PDB          : ONEPDB
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/db_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labhost01
               labhost02

Standby database environment setup

RAC Database : RENODBDR ( renodbdr1 & renodbdr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/dbhome_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labdrhost01
               labdrhost02

What is the DML Redirection feature?
DML Redirection is a new feature in Oracle 19c that allows DML operations to be executed on an Active Data Guard standby database.

How it works: when a user runs a DML statement on the Active Data Guard standby, the statement is actually passed to the primary database and executed there. The redo generated by that DML is then shipped back and applied to the standby, and only after that is control returned to the user who ran the statement.

Let's see how it works.

We have a schema, APPUSER, in the pluggable database ONEPDB.

SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ONEPDB                         READ WRITE YES
SQL>


We have a table named APP_TAB1 in the PDB ONEPDB.

On Primary database:
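
A minimal sketch of the commands for this step (the schema and table names come from the post; the column definitions and data are illustrative assumptions):

SQL> alter session set container=ONEPDB;
Session altered.

SQL> create table appuser.app_tab1 (id number, name varchar2(50));
Table created.

SQL> insert into appuser.app_tab1 values (1, 'row from primary');
1 row created.

SQL> commit;
Commit complete.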

On Standby database
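
On the standby (open read-only), the same table should be visible once redo apply catches up; a sketch using the same assumed columns and data:

SQL> alter session set container=ONEPDB;
Session altered.

SQL> select id, name from appuser.app_tab1;

        ID NAME
---------- --------------------
         1 row from primary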

Without enabling the DML Redirection feature, when we run an insert on the standby database, it errors out as expected.
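
The failure typically looks like this (a sketch against the assumed table; on a read-only standby the insert is rejected with ORA-16000):

SQL> insert into appuser.app_tab1 values (2, 'row from standby');
insert into appuser.app_tab1 values (2, 'row from standby')
            *
ERROR at line 1:
ORA-16000: database or pluggable database open for read-only access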

Enable DML Redirection at the session level using the command below:

 alter session enable adg_redirect_dml;

and retry the insert operation.
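
With the session-level switch enabled, the same insert goes through (a sketch; the elapsed time now includes the round trip to the primary):

SQL> alter session enable adg_redirect_dml;
Session altered.

SQL> insert into appuser.app_tab1 values (2, 'row from standby');
1 row created.

SQL> commit;
Commit complete.

DML Redirection can also be enabled for all sessions with the ADG_REDIRECT_DML initialization parameter (alter system set adg_redirect_dml=true;).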

Even though it works fine, this might not help every application: internally, every DML is redirected to the primary database and executed there, and the standby is only updated once it receives the redo for that DML transaction.

It is probably most useful for applications that perform mostly reads and very few writes.


Thanks
Sambaiah Sammeta

Friday, April 21, 2023

Oracle OUD error : ORA-28030: Server encountered problems accessing LDAP directory service

Recently we started observing an issue where attempts to connect to an OUD-integrated Oracle database failed with the exception below.

"dbhost01:/u01/app/oracle->sqlplus globaluser1@TESTDB
SQL*Plus: Release 12.1.0.2.0 Production on Fri Mar 3 18:31:16 2023
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Enter password:
ERROR:
ORA-28030: Server encountered problems accessing LDAP directory service


First, we need to identify the exact cause that is making the connection fail with this particular error.

Enable tracing in the Oracle database as shown below and then retry the connection.

1) Enable event 28033 tracing using the SQL below.

SQL> alter system set events '28033 trace name context forever, level 9';

2) Run the sqlplus connection again.

sqlplus globaluser1@TESTDB

3) Disable the tracing using the SQL below.

SQL> alter system set events '28033 trace name context off';

4) Check the dump directory for the generated trace file.

In my case, it generated the trace file below.

/oracle/app/diag/rdbms/testdb1/TESTDB/trace/TESTDB1_ora_142721.trc

Below is the output from this trace file.

As you can see from the second line, it is not able to get the correct credentials from the wallet.

kzld_discover received ldaptype: OID
KZLD_ERR: failed to get cred from wallet     <--------------------------------------------
KZLD_ERR: Failed to bind to LDAP server. Err=28032
KZLD_ERR: 28032
KZLD is doing LDAP unbind
KZLD_ERR: found err from kzldini.


This error means that either the wallet location specified in sqlnet.ora is incorrect or the wallet does not contain the correct credentials.

In my case, my sqlnet.ora had the wrong wallet location; once I corrected it, I was able to connect to the database.
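
For reference, the wallet location entry in sqlnet.ora looks like the following (the directory shown is an assumption; point it at wherever your wallet actually lives):

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/wallet)
    )
  )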

Hope this helps in case you run into a similar issue.

Thanks
Sambaiah Sammeta

Wednesday, April 19, 2023

Dataguard switchover error - ORA-16597: Oracle Data Guard broker detects two or more primary databases

I ran into the below error when I performed a switchover operation in my Data Guard configuration, which has one primary and two standby databases.

"ORA-16597: Oracle Data Guard broker detects two or more primary databases"

Please see below

 [oracle@dbhost01 ]$ dgmgrl /
DGMGRL for Linux: Release 19.0.0.0.0 - Production on Wed Apr 19 15:13:05 2023
Version 19.19.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "tureepr"
Connected as SYSDG.
DGMGRL> connect sys/ringrose;
Connected to "tureepr"
Connected as SYSDBA.
DGMGRL> show configuration;
Configuration - tureepr_cfg

  Protection Mode: MaxPerformance
  Members:
  tureepr - Primary database
    tureedr - Physical standby database
    tureetr - Physical standby database
Fast-Start Failover:  Disabled
Configuration Status:
SUCCESS   (status updated 35 seconds ago)

Performing the switchover operation.

DGMGRL> switchover to tureetr;
Performing switchover NOW, please wait...
Operation requires a connection to database "tureetr"
Connecting ...
Connected to "tureetr"
Connected as SYSDBA.
New primary database "tureetr" is opening...
Oracle Clusterware is restarting database "tureepr" ...
Connected to "tureepr"
Switchover succeeded, new primary is "tureetr"

DGMGRL>

DGMGRL> show configuration;
Configuration - tureepr_cfg
  Protection Mode: MaxPerformance
  Members:
  tureetr - Primary database
    tureepr - Physical standby database
    tureedr - Physical standby database (disabled)
      ORA-16597: Oracle Data Guard broker detects two or more primary databases
Fast-Start Failover:  Disabled
Configuration Status:
SUCCESS   (status updated 60 seconds ago)
DGMGRL> 

Even though the switchover completed successfully, for some reason the second standby database was left in a disabled state, and I also saw the error below:
   ORA-16597: Oracle Data Guard broker detects two or more primary databases

First, I went ahead and enabled the second standby database, which had been disabled during the switchover operation.

DGMGRL> enable database tureedr
Enabled.
DGMGRL> 

Surprisingly, when I checked the configuration, I saw that it had come back to normal; I am still wondering how and why...

DGMGRL> show configuration;

Configuration - tureepr_cfg
  Protection Mode: MaxPerformance
  Members:
  tureetr - Primary database
    tureepr - Physical standby database
    tureedr - Physical standby database

Fast-Start Failover:  Disabled
Configuration Status:
SUCCESS   (status updated 52 seconds ago)
DGMGRL>

Does anyone have any idea why the below error popped up?
   ORA-16597: Oracle Data Guard broker detects two or more primary databases


Thanks
Sambaiah Sammeta

Applying April, 2023 Release update on a 19c 2-node RAC environment

In this post, we will see how we can apply the 19.19 RU to both the Database and Grid homes in our 19c RAC Lab environment.

19.19 RU details
35037840 - GI Release Update 19.19.0.0.230418
35042068 - Database Release Update 19.19.0.0.230418 

My lab environment is a 2-node RAC cluster with the 19.17 patch set applied. Below is the current RU level of the existing GI and DB homes.

Source Environment setup

RAC Database : RENODBPR ( renodbpr1 & renodbpr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/db_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labhost01
               labhost02

Grid Home Current patch level

[oracle@labhost01 software]$ echo $ORACLE_HOME
/u01/app/19.3.0.0/grid
[oracle@labhost01 software]$ $ORACLE_HOME/OPatch/opatch lspatches
34580338;TOMCAT RELEASE UPDATE 19.0.0.0.0 (34580338)
34444834;OCW RELEASE UPDATE 19.17.0.0.0 (34444834)
34428761;ACFS RELEASE UPDATE 19.17.0.0.0 (34428761)
34419443;Database Release Update : 19.17.0.0.221018 (34419443)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)
OPatch succeeded.
[oracle@labhost01 software]$

Database home current patch level

[oracle@labhost01 software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19.3.0.0/db_1
[oracle@labhost01 software]$ $ORACLE_HOME/OPatch/opatch lspatches
34444834;OCW RELEASE UPDATE 19.17.0.0.0 (34444834)
34419443;Database Release Update : 19.17.0.0.221018 (34419443)
OPatch succeeded.
[oracle@labhost01 software]$

Download the Grid patch, the Database patch, and the latest OPatch from Oracle, and stage them on all nodes of the cluster (if it is RAC).

35037840 - GI Release Update 19.19.0.0.230418
35042068 - Database Release Update 19.19.0.0.230418 
p6880880_190000_Linux-x86-64.zip - the latest OPatch for 19c; we need 12.2.0.1.36 to apply the 19.19 RU.

The Oracle Grid Infrastructure and  Database patches are cumulative and include the database CPU program security content.

Patch Installation 

It is highly recommended to take a backup of the Oracle home binaries, the Grid home binaries, and the Central Inventory prior to applying patches.


Patch Installation prerequisites

1. OPatch utility version

You must use OPatch version 12.2.0.1.36 or later to apply this patch. Oracle recommends that you use the latest released OPatch version for 12.2, which is available for download from My Oracle Support patch 6880880 by selecting the ARU link for the 12.2.0.1.0 OPatch release.

Check the current OPatch version on the server

[oracle@labhost01 software]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.35
OPatch succeeded.
[oracle@labhost01 software]$

Download and unzip the latest OPatch utility to both the Grid and Database homes.
Grid home : 

Node 1

Perform the below steps as the root user:
[root@labhost01 ~]#  mv /u01/app/19.3.0.0/grid/OPatch /u01/app/19.3.0.0/grid/OPatch.orig1
[root@labhost01 ~]# unzip p6880880_190000_Linux-x86-64.zip -d /u01/app/19.3.0.0/grid
[root@labhost01 ~]# chown -Rf oracle:dba /u01/app/19.3.0.0/grid/OPatch

Let's check the OPatch version to see if it is updated.

[oracle@labhost01 software]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.36
OPatch succeeded.
[oracle@labhost01 software]$

Node2

[root@labhost02 ~]#  mv /u01/app/19.3.0.0/grid/OPatch /u01/app/19.3.0.0/grid/OPatch.orig
[root@labhost02 ~]# unzip p6880880_190000_Linux-x86-64.zip -d /u01/app/19.3.0.0/grid
[root@labhost02 ~]# chown -Rf oracle:dba /u01/app/19.3.0.0/grid/OPatch

Let's check the OPatch version to see if it is updated.
[oracle@labhost02 software]$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.36
OPatch succeeded.
[oracle@labhost02 software]$

Database home 

Node 1:

[oracle@labhost01 software]$ mv /u01/app/oracle/product/19.3.0.0/db_1/OPatch /u01/app/oracle/product/19.3.0.0/db_1/OPatch.orig
[oracle@labhost01 software]$ unzip p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.3.0.0/db_1/

[oracle@labhost01 software]$ . oraenv
ORACLE_SID = [+ASM1] ? renodbpr1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@labhost01 software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19.3.0.0/db_1
[oracle@labhost01 software]$  $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.36
OPatch succeeded.
[oracle@labhost01 software]$


Node 2:
[oracle@labhost02 software]$ mv /u01/app/oracle/product/19.3.0.0/db_1/OPatch /u01/app/oracle/product/19.3.0.0/db_1/OPatch.orig
[oracle@labhost02 software]$ unzip p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.3.0.0/db_1/

[oracle@labhost02 software]$ . oraenv
ORACLE_SID = [+ASM2] ? renodbpr2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@labhost02 software]$ echo $ORACLE_HOME
/u01/app/oracle/product/19.3.0.0/db_1
[oracle@labhost02 software]$  $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.36
OPatch succeeded.
[oracle@labhost02 software]$

2. Check the consistency of the Oracle inventory for both the Grid and Database homes

Before we apply any patches, it is strongly recommended to check the consistency of the inventory information for both the Grid home and the Oracle homes we are patching.

$ <ORACLE_HOME>/OPatch/opatch lsinventory -detail -oh <ORACLE_HOME>

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so that you have the status prior to the patch application. However, if this command fails, contact Oracle Support Services for assistance.

In our case, it checks out without any issues. Below is a trimmed version, as the full output of the above command is very long.

3. Run the OPatch conflict check

Determine whether any currently installed one-off patches conflict with the patches in this RU (35037840) as follows.

For the Grid Infrastructure home, as the oracle user:

export ORACLE_HOME=/u01/app/19.3.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35042068
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35050331
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35050325
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35107512
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/33575402

Below is the trimmed output of the above commands from node 1.


For the Oracle Database home, as the home user:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35042068
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/software/35037840/35050331

Below is the trimmed output of the above commands from node 1.


Note: repeat these steps on all the nodes of the cluster.

4) Run the OPatch system space check

Check that enough free space is available on the ORACLE_HOME filesystem for the patches to be applied, as shown below.

For the Grid Infrastructure home, as the home user, create file /tmp/patch_list_gihome.txt with the following content:

/u01/software/35037840/35042068
/u01/software/35037840/35050331
/u01/software/35037840/35050325
/u01/software/35037840/35107512
/u01/software/35037840/33575402

Run the OPatch command to check if enough free space is available in the Grid Infrastructure home:

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt



For the Database home, create file /tmp/patch_list_db_home.txt with the content below:

/u01/software/35037840/35042068
/u01/software/35037840/35050331

Run the OPatch command to check if enough free space is available in the Database home:

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_db_home.txt

5) One-off Patch Conflict Detection and Resolution

The following command checks for conflicts in both the 19c Grid home and the 19c DB homes. Before applying the patch, run it in analyze mode as the root user:
[root@labhost01 ~]# export ORACLE_HOME=/u01/app/19.3.0.0/grid
[root@labhost01 ~]# export PATH=$ORACLE_HOME/bin:$PATH
[root@labhost01 ~]# $ORACLE_HOME/OPatch/opatchauto apply /u01/software/35037840 -analyze

6) Patch Installation 


Let's apply the patch to both the Grid and DB homes using 'opatchauto'.

The opatchauto utility automates patch application for the Oracle Grid Infrastructure (Grid) home and the Oracle RAC database homes. It works by querying the existing configuration and automating the steps required to patch each Oracle RAC database home of the same version as well as the Grid home. The utility must be executed by an operating system (OS) user with root privileges, and it must be executed on each node in the cluster if the Grid home or Oracle RAC database home is on non-shared storage.

To patch the Grid home and all Oracle RAC database homes of the same version, we can use the command below.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/OPatch/opatchauto apply /u01/software/35037840

Node 1: 

Check the applied patches for the database and grid homes.

Repeat the same steps on all the nodes of the cluster.

Please note that datapatch runs automatically when the last node is patched.
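
If you want to verify or re-run the SQL-level patching yourself, datapatch can be invoked manually from the database home on the last patched node (a sketch):

[oracle@labhost02 ~]$ cd $ORACLE_HOME/OPatch
[oracle@labhost02 OPatch]$ ./datapatch -verbose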

7) Check the DBA registry of the database
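
A query along these lines shows what datapatch recorded (dba_registry_sqlpatch is the standard view for this; formatting is up to you):

SQL> select patch_id, patch_type, action, status, action_time
  2  from dba_registry_sqlpatch
  3  order by action_time;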

With this, patching 19c with the 19.19 RU is complete.

I did run into one issue while the patch was being applied to the grid home; it is discussed in the post below:

http://myoracle-world.blogspot.com/2023/04/grid-patch-failed-with-checksystemspace.html

Thanks
Sambaiah Sammeta


Grid patch failed with "CheckSystemSpace" error

Today I was applying the 19.19 RU to my lab RAC environment; after the database home was patched, opatchauto failed with a 'CheckSystemSpace' error while applying the patch to the grid home.

I had run the 'CheckSystemSpace' check before applying the patch; it completed successfully and did not raise any flag about space.

Maybe space got consumed under cfgtoollogs during the patching, which resulted in the space issue.

Anyhow, I cleaned up the space and resumed opatchauto as below, and this time it went through.
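
The resume itself is a single command, run as root on the node where opatchauto failed (a sketch; opatchauto picks up the failed session automatically):

[root@labhost01 ~]# export ORACLE_HOME=/u01/app/19.3.0.0/grid
[root@labhost01 ~]# export PATH=$ORACLE_HOME/bin:$PATH
[root@labhost01 ~]# $ORACLE_HOME/OPatch/opatchauto resume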

As you can see, this time it completed without any issues.


Thanks
Sambaiah Sammeta

Tuesday, April 18, 2023

April, 2023 Release Update is out now

Oracle has released the April 2023 Release Update for the supported versions. Below are the details of the patches for 19c.

Patch           -     Description

-----------            -----------------------------------------------------

35037840     -     GI Release Update 19.19.0.0.230418
35042068     -     Database Release Update 19.19.0.0.230418
35050331     -     OCW Release Update 19.19.0.0.230418
35050325     -     ACFS Release Update 19.19.0.0.230418
35107512     -     Tomcat Release Update 19.0.0.0.0
33575402     -     DBWLM Release Update 19.0.0.0.0

In the next blog post, we will see how to apply the 19.19 RU to the 19c RAC environment.


Thanks
Sambaiah Sammeta

Tuesday, March 28, 2023

Oracle Data Guard errors: ORA-16853 / ORA-16855

I was getting the below warning when I ran the 'show configuration' command from the Data Guard broker.

DGMGRL>  show configuration;

Configuration - renodbdg
  Protection Mode: MaxPerformance
  Members:
  renodbpr - Primary database
    renodbdr - Physical standby database
      Warning: ORA-16809: multiple warnings detected for the member
Fast-Start Failover:  Disabled
Configuration Status:
WARNING   
DGMGRL> 

I ran the 'show database' command for the standby database and found the errors below.

DGMGRL>  show database renodbdr
Database - renodbdr

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      1 hour(s) 14 minutes 35 seconds (computed 0 seconds ago)
  Apply Lag:          1 hour(s) 15 minutes (computed 0 seconds ago)
  Average Apply Rate: 2.83 MByte/s
  Real Time Query:    ON
  Instance(s):
    renodbdr1 (apply instance)
    renodbdr2
  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold
    ORA-16855: transport lag has exceeded specified threshold

Database Status:
WARNING
DGMGRL> 

Cause: the standby database is a RAC database, and the standby redo logs (SRLs) had been created for only one instance.
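
The per-thread SRL layout can be checked with a query like this (v$standby_log is the relevant view):

SQL> select thread#, group#, bytes/1024/1024 mb, status
  2  from v$standby_log
  3  order by thread#, group#;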

In our case, all the standby redo logs had been created for thread 1 (the first instance) only.

Our primary database has 2 redo log groups for each thread. I will drop the last three standby redo logs from the standby database and re-create them for thread 2.

1. Stop the MRP process

SQL> ALTER DATABASE RECOVER  managed standby database cancel;
Database altered.

2. Drop the unwanted standby redo logs.

SQL> alter database drop standby logfile group 8;
Database altered.
SQL> alter database drop standby logfile group 9;
Database altered.
SQL> alter database drop standby logfile group 10;
Database altered.

3) Add the SRLs for the thread 2.

SQL> alter database add standby logfile thread 2 group 8 size 200M;
Database altered.
SQL> alter database add standby logfile thread 2 group 9  size 200M;
Database altered.
SQL> alter database add standby logfile thread 2 group 10  size 200M;
Database altered.

Check the database to see if the SRLs now show up correctly.

4. Start the MRP process

SQL> ALTER DATABASE RECOVER  managed standby database using current logfile disconnect;
Database altered.

5. Check the configuration in the dataguard broker.

DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Mar 28 10:51:52 2023
Version 19.17.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "renodbpr"
Connected as SYSDG.
DGMGRL> connect sys/welcome;
Connected to "renodbpr"
Connected as SYSDBA.
DGMGRL> show configuration;
Configuration - renodbdg
  Protection Mode: MaxPerformance
  Members:
  renodbpr - Primary database
    renodbdr - Physical standby database
Fast-Start Failover:  Disabled
Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> 

That fixed the issue, and the configuration looks good in the Data Guard broker.

Hope this helps if you are seeing the same issue :)

Thanks
Sambaiah Sammeta

Oracle Grid and Database RU patch roll-back process

In this post, we will see how to roll back the 19.18 RU to 19.17 from the Grid and Database homes.

For this, we will use the below 19c RAC setup for the primary and standby environments.

Primary Source Environment setup

RAC Database : RENODBPR ( renodbpr1 & renodbpr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/db_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labhost01
               labhost02

Standby database environment setup

RAC Database : RENODBDR ( renodbdr1 & renodbdr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/dbhome_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labdrhost01
               labdrhost02

Both the primary database and standby database have 19.18 RU applied.

Primary database patch set level.
[oracle@labhost01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
OPatch succeeded.
[oracle@labhost01 ~]$

Standby database patchset level.
[oracle@labdrhost01 trace]$ cd
[oracle@labdrhost01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
OPatch succeeded.
 [oracle@labdrhost01 ~]$

Patch roll-back process

1. Make sure that the standby database is in sync with the primary database.

2. Download the 19.18 RU and stage it on the server.

    Patch 34762026 - GI Release Update 19.18.0.0.230117 (REL-JAN230131)

3. Roll back the patch

As the root user, we can roll back the patch using opatchauto.

We should roll back the patch on the standby database servers first, and then on the primary database servers.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
$ORACLE_HOME/OPatch/opatchauto rollback /u01/software/34762026

Rollback step on the node 1 of the Standby database server.

[root@labdrhost01 34762026]# $ORACLE_HOME/OPatch/opatchauto rollback /u01/software/34762026

OPatchauto session is initiated at Mon Mar 27 14:06:00 2023

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2023-03-27_02-06-02PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2023-03-27_02-06-25PM.log
The id for this session is JU2B
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Executing patch validation checks on home /u01/app/19.3.0.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0.0/grid

Executing patch validation checks on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Skipping SQL patch step execution on standby database : renodbdr
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Successfully prepared home /u01/app/oracle/product/19.3.0.0/dbhome_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/labdrhost01/crsconfig/crs_prepatch_rollback_inplace_labdrhost01_2023-03-27_02-09-31PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid


Performing prepatch operation on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Start rolling back binary patch on home /u01/app/19.3.0.0/grid
Binary patch rolled back successfully on home /u01/app/19.3.0.0/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0.0/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/labdrhost01/crsconfig/crs_postpatch_rollback_inplace_labdrhost01_2023-03-27_02-21-43PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid

Preparing home /u01/app/oracle/product/19.3.0.0/dbhome_1 after database service restarted
No step execution required.........

Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Skipping SQL patch step execution on standby database : renodbdr
No sqlpatch operations are required on the local node for this home
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:labdrhost01
RAC Home:/u01/app/oracle/product/19.3.0.0/dbhome_1
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /u01/software/34762026/34768569
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/33575402
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/34863894
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/oracle/product/19.3.0.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-14-28PM_1.log
Patch: /u01/software/34762026/34765931
Log: /u01/app/oracle/product/19.3.0.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-14-28PM_1.log
Host:labdrhost01
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/34762026/33575402
Reason: Patch /u01/software/34762026/33575402 is not applied as part of bundle patch 34762026

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-18-13PM_1.log

Patch: /u01/software/34762026/34768569
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-18-13PM_1.log

Patch: /u01/software/34762026/34863894
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-18-13PM_1.log

Patch: /u01/software/34762026/34765931
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-18-13PM_1.log

OPatchauto session completed at Mon Mar 27 14:24:38 2023
Time taken to complete the session 18 minutes, 38 seconds
[root@labdrhost01 34762026]# 

Rolling back the patch on the 2nd node of the Standby database server.


[root@labdrhost02 ~]# export ORACLE_HOME=/u01/app/19.3.0.0/grid
[root@labdrhost02 ~]# export PATH=$ORACLE_HOME/bin:$PATH
[root@labdrhost02 ~]# $ORACLE_HOME/OPatch/opatchauto rollback /u01/software/34762026

OPatchauto session is initiated at Mon Mar 27 14:24:56 2023

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2023-03-27_02-24-58PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2023-03-27_02-25-22PM.log
The id for this session is NAGH

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Executing patch validation checks on home /u01/app/19.3.0.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0.0/grid

Executing patch validation checks on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Skipping SQL patch step execution on standby database : renodbdr
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Successfully prepared home /u01/app/oracle/product/19.3.0.0/dbhome_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/labdrhost02/crsconfig/crs_prepatch_rollback_inplace_labdrhost02_2023-03-27_02-28-42PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid

Performing prepatch operation on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

Start rolling back binary patch on home /u01/app/19.3.0.0/grid
Binary patch rolled back successfully on home /u01/app/19.3.0.0/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0.0/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/labdrhost02/crsconfig/crs_postpatch_rollback_inplace_labdrhost02_2023-03-27_02-41-07PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid

Preparing home /u01/app/oracle/product/19.3.0.0/dbhome_1 after database service restarted
No step execution required.........

Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0.0/dbhome_1
Skipping SQL patch step execution on standby database : renodbdr
No sqlpatch operations are required on the local node for this home
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/dbhome_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:labdrhost02
RAC Home:/u01/app/oracle/product/19.3.0.0/dbhome_1
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:
Patch: /u01/software/34762026/34768569
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /u01/software/34762026/33575402
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /u01/software/34762026/34863894
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/oracle/product/19.3.0.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-33-36PM_1.log

Patch: /u01/software/34762026/34765931
Log: /u01/app/oracle/product/19.3.0.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-33-36PM_1.log

Host:labdrhost02
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/34762026/33575402
Reason: Patch /u01/software/34762026/33575402 is not applied as part of bundle patch 34762026

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-37-25PM_1.log

Patch: /u01/software/34762026/34768569
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-37-25PM_1.log

Patch: /u01/software/34762026/34863894
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-37-25PM_1.log

Patch: /u01/software/34762026/34765931
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-37-25PM_1.log

OPatchauto session completed at Mon Mar 27 14:44:10 2023
Time taken to complete the session 19 minutes, 15 seconds
[root@labdrhost02 ~]#


Rolling back the patch on the First node of the Primary database server.

[root@labhost01 ~]#  export ORACLE_HOME=/u01/app/19.3.0.0/grid
[root@labhost01 ~]#  export PATH=$ORACLE_HOME/bin:$PATH
[root@labhost01 ~]# $ORACLE_HOME/OPatch/opatchauto rollback /u01/software/34762026

OPatchauto session is initiated at Mon Mar 27 14:24:58 2023

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2023-03-27_02-25-00PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2023-03-27_02-25-22PM.log
The id for this session is FW7Q

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0.0/db_1
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Executing patch validation checks on home /u01/app/19.3.0.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0.0/grid

Executing patch validation checks on home /u01/app/oracle/product/19.3.0.0/db_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.3.0.0/db_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Preparing to bring down database service on home /u01/app/oracle/product/19.3.0.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0.0/db_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/labhost01/crsconfig/crs_prepatch_rollback_inplace_labhost01_2023-03-27_02-29-52PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid


Performing prepatch operation on home /u01/app/oracle/product/19.3.0.0/db_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Start rolling back binary patch on home /u01/app/19.3.0.0/grid
Binary patch rolled back successfully on home /u01/app/19.3.0.0/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0.0/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/labhost01/crsconfig/crs_postpatch_rollback_inplace_labhost01_2023-03-27_02-42-19PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid


Preparing home /u01/app/oracle/product/19.3.0.0/db_1 after database service restarted
No step execution required.........


Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0.0/db_1
No sqlpatch operations are required on the local node for this home
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/db_1

OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:labhost01
RAC Home:/u01/app/oracle/product/19.3.0.0/db_1
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:

Patch: /u01/software/34762026/34768569
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/33575402
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/34863894
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-34-52PM_1.log
Patch: /u01/software/34762026/34765931
Log: /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-34-52PM_1.log

Host:labhost01
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:

Patch: /u01/software/34762026/33575402
Reason: Patch /u01/software/34762026/33575402 is not applied as part of bundle patch 34762026

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-38-34PM_1.log
Patch: /u01/software/34762026/34768569
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-38-34PM_1.log
Patch: /u01/software/34762026/34863894
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-38-34PM_1.log
Patch: /u01/software/34762026/34765931
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_14-38-34PM_1.log

OPatchauto session completed at Mon Mar 27 14:45:24 2023
Time taken to complete the session 20 minutes, 26 seconds
[root@labhost01 ~]# 


Rolling back the patch on the second node of the Primary database server.

[root@labhost02 ~]#  export ORACLE_HOME=/u01/app/19.3.0.0/grid
[root@labhost02 ~]#  export PATH=$ORACLE_HOME/bin:$PATH
[root@labhost02 ~]# $ORACLE_HOME/OPatch/opatchauto rollback /u01/software/34762026

OPatchauto session is initiated at Mon Mar 27 14:59:19 2023

System initialization log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2023-03-27_02-59-25PM.log.

Session log file is /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2023-03-27_02-59-48PM.log
The id for this session is CF35

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.3.0.0/db_1
Patch applicability verified successfully on home /u01/app/19.3.0.0/grid

Patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Executing patch validation checks on home /u01/app/19.3.0.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0.0/grid

Executing patch validation checks on home /u01/app/oracle/product/19.3.0.0/db_1
Patch validation checks successfully completed on home /u01/app/oracle/product/19.3.0.0/db_1

Verifying SQL patch applicability on home /u01/app/oracle/product/19.3.0.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.3.0.0/db_1


Preparing to bring down database service on home /u01/app/oracle/product/19.3.0.0/db_1
Successfully prepared home /u01/app/oracle/product/19.3.0.0/db_1 to bring down database service

Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0.0/grid
Prepatch operation log file location: /u01/app/oracle/crsdata/labhost02/crsconfig/crs_prepatch_rollback_inplace_labhost02_2023-03-27_03-03-11PM.log
CRS service brought down successfully on home /u01/app/19.3.0.0/grid

Performing prepatch operation on home /u01/app/oracle/product/19.3.0.0/db_1
Prepatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Start rolling back binary patch on home /u01/app/oracle/product/19.3.0.0/db_1
Binary patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Performing postpatch operation on home /u01/app/oracle/product/19.3.0.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/19.3.0.0/db_1

Start rolling back binary patch on home /u01/app/19.3.0.0/grid
Binary patch rolled back successfully on home /u01/app/19.3.0.0/grid

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0.0/grid
Postpatch operation log file location: /u01/app/oracle/crsdata/labhost02/crsconfig/crs_postpatch_rollback_inplace_labhost02_2023-03-27_03-15-17PM.log
CRS service started successfully on home /u01/app/19.3.0.0/grid


Preparing home /u01/app/oracle/product/19.3.0.0/db_1 after database service restarted
No step execution required.........


Trying to roll back SQL patch on home /u01/app/oracle/product/19.3.0.0/db_1
SQL patch rolled back successfully on home /u01/app/oracle/product/19.3.0.0/db_1
OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:labhost02
RAC Home:/u01/app/oracle/product/19.3.0.0/db_1
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:

Patch: /u01/software/34762026/34768569
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/33575402
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/34762026/34863894
Reason: This patch is not applicable to this specified target type - "rac_database"

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-08-03PM_1.log

Patch: /u01/software/34762026/34765931
Log: /u01/app/oracle/product/19.3.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-08-03PM_1.log

Host:labhost02
CRS Home:/u01/app/19.3.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/34762026/33575402
Reason: Patch /u01/software/34762026/33575402 is not applied as part of bundle patch 34762026

==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/software/34762026/34768559
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-11-44PM_1.log

Patch: /u01/software/34762026/34768569
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-11-44PM_1.log

Patch: /u01/software/34762026/34863894
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-11-44PM_1.log

Patch: /u01/software/34762026/34765931
Log: /u01/app/19.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2023-03-27_15-11-44PM_1.log

OPatchauto session completed at Mon Mar 27 15:22:15 2023
Time taken to complete the session 22 minutes, 57 seconds
[root@labhost02 ~]#

4. Run datapatch for each primary database. In our case, we will run datapatch on node 1 of the primary for the database renodbpr.

[oracle@labhost01 OPatch]$ ./datapatch -verbose
SQL Patching tool version 19.17.0.0.0 Production on Mon Mar 27 16:29:16 2023
Copyright (c) 2012, 2022, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10770_2023_03_27_16_29_16/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    19.17.0.0.0 Release_Update 220924224051: Installed
  PDB CDB$ROOT:
    Rolled back to 19.17.0.0.0 Release_Update 220924224051 successfully on 27-MAR-23 03.22.10.246894 PM
  PDB ONEPDB:
    Applied 19.17.0.0.0 Release_Update 220924224051 successfully on 18-JAN-23 09.02.58.264135 PM
  PDB PDB$SEED:
    Rolled back to 19.17.0.0.0 Release_Update 220924224051 successfully on 27-MAR-23 03.22.10.530825 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED ONEPDB
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied

SQL Patching tool complete on Mon Mar 27 16:29:57 2023
[oracle@labhost01 OPatch]$

5. Check lsinventory or lspatches on each node

[oracle@labhost02 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34580338;TOMCAT RELEASE UPDATE 19.0.0.0.0 (34580338)
34444834;OCW RELEASE UPDATE 19.17.0.0.0 (34444834)
34428761;ACFS RELEASE UPDATE 19.17.0.0.0 (34428761)
34419443;Database Release Update : 19.17.0.0.221018 (34419443)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)
OPatch succeeded.
[oracle@labhost02 ~]$

6. For each database (and each PDB, if any), run utlrp if there are invalid objects.
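
A sketch of that check and recompile, run in each container:

SQL> select count(*) from dba_objects where status = 'INVALID';
SQL> @?/rdbms/admin/utlrp.sql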

7. Check the standby database to ensure it is receiving and applying logs and is in sync with the primary database.
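
A quick way to check the lag from the standby side (v$dataguard_stats; the broker's 'show configuration' works as well):

SQL> select name, value from v$dataguard_stats
  2  where name in ('transport lag', 'apply lag');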

That's it; we have completed the rollback of the 19.18 RU on a RAC database with a standby database configuration.

Hope this document helps you.

Thanks
Sambaiah Sammeta


Wednesday, February 8, 2023

Roll Forward Physical Standby Using RMAN Incremental Backup in Single Command (almost :))

In this post, let's see how we can roll forward a physical standby database when there is a huge gap in standby synchronization.

Whenever there is a huge gap on a physical standby, we can simply roll it forward and bring it back in sync with the primary using an incremental backup taken on the primary and applied to the standby.

For 11g, we can use the below document:
Steps to perform for Rolling Forward a Physical Standby Database using RMAN Incremental Backup (Doc ID 836986.1)

For 12.1 and 12.2, we can use the below document:
How to Roll Forward a Standby Database Using Recover Database From Service (Doc ID 2850185.1)

For versions 18c and above, refer to this document:
Roll Forward Physical Standby Using RMAN Incremental Backup in Single Command (Doc ID 2431311.1)

Since our version is 19c, we are going to follow Doc ID 2431311.1.

Primary Source Environment setup

RAC Database : RENODBPR ( renodbpr1 & renodbpr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/db_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labhost01
               labhost02

Standby database environment setup

RAC Database : RENODBDR ( renodbdr1 & renodbdr2)
GRID Home    : /u01/app/19.3.0.0/grid
RDBMS Home   : /u01/app/oracle/product/19.3.0.0/dbhome_1
Version      : Oracle Database 19c EE - Production Version 19.17.0.0.0
Hosts        : labdrhost01
               labdrhost02

Both the primary database and standby database have 19.18 RU applied.

Primary database patch set level.
[oracle@labhost01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
OPatch succeeded.

[oracle@labhost01 ~]$

Standby database patchset level.
[oracle@labdrhost01 trace]$ cd
[oracle@labdrhost01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34768559;OCW RELEASE UPDATE 19.18.0.0.0 (34768559)
34765931;DATABASE RELEASE UPDATE : 19.18.0.0.230117 (REL-JAN230131) (34765931)
OPatch succeeded.
[oracle@labdrhost01 ~]$

If we check the standby database for lag, we can see that it is well behind the primary and is waiting for logs.
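
A sketch of checking the last applied sequence per thread on the standby (v$archived_log):

SQL> select thread#, max(sequence#) last_applied
  2  from v$archived_log
  3  where applied = 'YES'
  4  group by thread#;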

From the alert log of the standby database, we can see that it is waiting for log sequences 62-87.

Let's see how to re-sync the physical standby database using the steps mentioned in Doc ID 2431311.1.

1.  Stop the RAC database and start only one instance in mount mode.
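
A sketch of that step on the standby (the database name comes from the post):

[oracle@labdrhost01 ~]$ srvctl stop database -d renodbdr
[oracle@labdrhost01 ~]$ sqlplus / as sysdba
SQL> startup mount;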

2. Stop the managed recovery process if it is running.

SQL> select name,open_mode from gv$database;
NAME OPEN_MODE
--------- --------------------
RENODBPR MOUNTED

SQL> ALTER DATABASE RECOVER managed standby database cancel;
Database altered.

SQL>

3. Test connecting to the primary database from the physical standby database using its service name.
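
A sketch of that test (the TNS alias comes from the post; the password is masked):

[oracle@labdrhost01 ~]$ tnsping RENODBPR
[oracle@labdrhost01 ~]$ sqlplus sys/********@RENODBPR as sysdba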

4. Connect to the RMAN target and recover the physical standby database using the command below:

"RECOVER STANDBY DATABASE FROM SERVICE"
In our case, the command will be "RECOVER STANDBY DATABASE FROM SERVICE RENODBPR".

[oracle@labdrhost01 ~]$ rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Wed Feb 8 15:50:31 2023
Version 19.18.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: RENODBPR (DBID=68216390, not open)

RMAN> RECOVER STANDBY DATABASE FROM SERVICE RENODBPR;

Starting recover at 08-FEB-23
using target database control file instead of recovery catalog
Oracle instance started

Total System Global Area    3053450480 bytes

Fixed Size                     9168112 bytes
Variable Size                687865856 bytes
Database Buffers            2348810240 bytes
Redo Buffers                   7606272 bytes

contents of Memory Script:
{
   restore standby controlfile from service  'RENODBPR';
   alter database mount standby database;
}
executing Memory Script

Starting restore at 08-FEB-23
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 instance=renodbdr1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=+DATA/RENODBDR/CONTROLFILE/current.258.1128095897
output file name=+DATA/RENODBDR/CONTROLFILE/current.259.1128095897
Finished restore at 08-FEB-23

released channel: ORA_DISK_1
Statement processed

contents of Memory Script:
{
set newname for tempfile  1 to
 "+DATA/RENODBDR/TEMPFILE/temp.316.1128097179";
set newname for tempfile  2 to
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/TEMPFILE/temp.317.1128097183";
set newname for clone tempfile  3 to new;
   switch tempfile all;
set newname for datafile  1 to
 "+DATA/RENODBDR/DATAFILE/system.260.1128095905";
set newname for datafile  3 to
 "+DATA/RENODBDR/DATAFILE/sysaux.261.1128095911";
set newname for datafile  4 to
 "+DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919";
set newname for datafile  5 to
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/system.263.1128095923";
set newname for datafile  6 to
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/sysaux.264.1128095925";
set newname for datafile  7 to
 "+DATA/RENODBDR/DATAFILE/users.265.1128095929";
set newname for datafile  8 to
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/undotbs1.266.1128095929";
set newname for datafile  9 to
 "+DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933";
set newname for datafile  10 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933";
set newname for datafile  11 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937";
set newname for datafile  12 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941";
set newname for datafile  13 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941";
set newname for datafile  14 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943";
set newname for datafile  15 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943";
set newname for datafile  16 to
 "+DATA/RENODBDR/DATAFILE/tbs123.274.1128095945";
set newname for datafile  17 to
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947";
   catalog datafilecopy  "+DATA/RENODBDR/DATAFILE/system.260.1128095905",
 "+DATA/RENODBDR/DATAFILE/sysaux.261.1128095911",
 "+DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919",
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/system.263.1128095923",
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/sysaux.264.1128095925",
 "+DATA/RENODBDR/DATAFILE/users.265.1128095929",
 "+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/undotbs1.266.1128095929",
 "+DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943",
 "+DATA/RENODBDR/DATAFILE/tbs123.274.1128095945",
 "+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947";
   switch datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting implicit crosscheck backup at 08-FEB-23
allocated channel: ORA_DISK_1
Crosschecked 6 objects
Finished implicit crosscheck backup at 08-FEB-23

Starting implicit crosscheck copy at 08-FEB-23
using channel ORA_DISK_1
Finished implicit crosscheck copy at 08-FEB-23

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/RENODBDR/AUTOBACKUP/2023_02_07/s_1128178754.344.1128178953
File Name: +DATA/RENODBDR/AUTOBACKUP/2023_02_06/s_1128096941.328.1128097767
File Name: +DATA/RENODBDR/AUTOBACKUP/2023_02_06/s_1128098761.333.1128098763
File Name: +DATA/RENODBDR/TEMPFILE/temp.316.1128097179
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_55.345.1128213443
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_56.346.1128249453
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_44.347.1128249455
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_57.348.1128249455
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_45.349.1128249461
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_58.350.1128249461
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_59.351.1128249461
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_60.352.1128249465
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_46.353.1128249467
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_61.354.1128249603
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_47.355.1128249605
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_88.356.1128250035
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_61.357.1128250039
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_89.358.1128250039
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_90.359.1128250043
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_62.360.1128250045
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_91.361.1128250045
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_63.362.1128250047
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_92.363.1128250049
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_64.364.1128250051
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_93.365.1128250051
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_65.366.1128250053
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_94.367.1128250055
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_66.368.1128250057
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_95.369.1128250057
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_67.370.1128250059
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_96.371.1128250061
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_68.372.1128250063
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_97.373.1128250063
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_69.374.1128250067
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_98.375.1128250067
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_99.376.1128250071
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_70.377.1128250073
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_100.378.1128250073
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_1_seq_101.379.1128250075
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_08/thread_2_seq_71.380.1128250075
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_1_seq_52.338.1128153619
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_2_seq_41.339.1128153621
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_1_seq_53.340.1128178355
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_2_seq_42.341.1128178355
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_2_seq_43.342.1128178357
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_07/thread_1_seq_54.343.1128178359
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_35.300.1128096773
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_36.301.1128096775
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_24.302.1128096777
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_25.303.1128096779
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_37.304.1128096881
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_38.305.1128096883
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_39.306.1128096887
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_40.307.1128096929
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_41.308.1128096931
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_42.309.1128096935
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_43.310.1128096937
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_44.311.1128096941
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_26.312.1128096943
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_27.313.1128096943
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_28.314.1128096945
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_29.315.1128096947
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_45.318.1128097511
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_46.319.1128097545
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_30.320.1128097729
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_31.321.1128097731
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_32.322.1128097731
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_33.323.1128097733
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_34.324.1128097735
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_35.325.1128097735
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_36.326.1128097737
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_37.327.1128097737
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_47.329.1128098421
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_38.330.1128098421
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_39.331.1128098447
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_48.332.1128098447
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_49.334.1128099489
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_50.335.1128099489
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_2_seq_40.336.1128099491
File Name: +DATA/RENODBDR/ARCHIVELOG/2023_02_06/thread_1_seq_51.337.1128099523
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943
File Name: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947
File Name: +DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/TEMPFILE/temp.317.1128097183
File Name: +DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/system.263.1128095923
File Name: +DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/sysaux.264.1128095925
File Name: +DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/undotbs1.266.1128095929
File Name: +DATA/RENODBDR/DATAFILE/system.260.1128095905
File Name: +DATA/RENODBDR/DATAFILE/sysaux.261.1128095911
File Name: +DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919
File Name: +DATA/RENODBDR/DATAFILE/users.265.1128095929
File Name: +DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933
File Name: +DATA/RENODBDR/DATAFILE/tbs123.274.1128095945

renamed tempfile 1 to +DATA/RENODBDR/TEMPFILE/temp.316.1128097179 in control file
renamed tempfile 2 to +DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/TEMPFILE/temp.317.1128097183 in control file
renamed tempfile 3 to +DATA in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/sysaux.261.1128095911 RECID=20 STAMP=1128268275
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/system.260.1128095905 RECID=21 STAMP=1128268275
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919 RECID=22 STAMP=1128268281
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/system.263.1128095923 RECID=23 STAMP=1128268281
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/sysaux.264.1128095925 RECID=24 STAMP=1128268287
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/users.265.1128095929 RECID=25 STAMP=1128268287
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/undotbs1.266.1128095929 RECID=26 STAMP=1128268293
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933 RECID=27 STAMP=1128268293
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933 RECID=28 STAMP=1128268299
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937 RECID=29 STAMP=1128268299
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941 RECID=30 STAMP=1128268305
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941 RECID=31 STAMP=1128268305
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943 RECID=32 STAMP=1128268312
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943 RECID=33 STAMP=1128268312
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/DATAFILE/tbs123.274.1128095945 RECID=34 STAMP=1128268318
cataloged datafile copy
datafile copy file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947 RECID=35 STAMP=1128268318

datafile 1 switched to datafile copy
input datafile copy RECID=21 STAMP=1128268275 file name=+DATA/RENODBDR/DATAFILE/system.260.1128095905
datafile 3 switched to datafile copy
input datafile copy RECID=20 STAMP=1128268275 file name=+DATA/RENODBDR/DATAFILE/sysaux.261.1128095911
datafile 4 switched to datafile copy
input datafile copy RECID=22 STAMP=1128268281 file name=+DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919
datafile 5 switched to datafile copy
input datafile copy RECID=23 STAMP=1128268281 file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/system.263.1128095923
datafile 6 switched to datafile copy
input datafile copy RECID=24 STAMP=1128268287 file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/sysaux.264.1128095925
datafile 7 switched to datafile copy
input datafile copy RECID=25 STAMP=1128268287 file name=+DATA/RENODBDR/DATAFILE/users.265.1128095929
datafile 8 switched to datafile copy
input datafile copy RECID=26 STAMP=1128268293 file name=+DATA/RENODBDR/F2960A86AEBC354FE0530A0278C09105/DATAFILE/undotbs1.266.1128095929
datafile 9 switched to datafile copy
input datafile copy RECID=27 STAMP=1128268293 file name=+DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933
datafile 10 switched to datafile copy
input datafile copy RECID=28 STAMP=1128268299 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933
datafile 11 switched to datafile copy
input datafile copy RECID=29 STAMP=1128268299 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937
datafile 12 switched to datafile copy
input datafile copy RECID=30 STAMP=1128268305 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941
datafile 13 switched to datafile copy
input datafile copy RECID=31 STAMP=1128268305 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941
datafile 14 switched to datafile copy
input datafile copy RECID=32 STAMP=1128268312 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943
datafile 15 switched to datafile copy
input datafile copy RECID=33 STAMP=1128268312 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943
datafile 16 switched to datafile copy
input datafile copy RECID=34 STAMP=1128268318 file name=+DATA/RENODBDR/DATAFILE/tbs123.274.1128095945
datafile 17 switched to datafile copy
input datafile copy RECID=35 STAMP=1128268318 file name=+DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_1.263.1126471369' to '+DATA/RENODBDR/ONLINELOG/group_1.276.1128095951'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_1.266.1126471371' to '+DATA/RENODBDR/ONLINELOG/group_1.277.1128095951'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_2.264.1126471369' to '+DATA/RENODBDR/ONLINELOG/group_2.278.1128095951'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_2.265.1126471371' to '+DATA/RENODBDR/ONLINELOG/group_2.279.1128095951'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_3.273.1126472631' to '+DATA/RENODBDR/ONLINELOG/group_3.280.1128095951'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_3.274.1126472631' to '+DATA/RENODBDR/ONLINELOG/group_3.281.1128095953'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_4.275.1126472631' to '+DATA/RENODBDR/ONLINELOG/group_4.282.1128095953'
Executing: alter database rename file '+DATA/RENODBPR/ONLINELOG/group_4.276.1126472631' to '+DATA/RENODBDR/ONLINELOG/group_4.283.1128095953'

contents of Memory Script:
{
  recover database from service  'RENODBPR';
}
executing Memory Script

Starting recover at 08-FEB-23
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 3943135
skipping datafile 6; already restored to SCN 3943135
skipping datafile 8; already restored to SCN 3943135
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00001: +DATA/RENODBDR/DATAFILE/system.260.1128095905
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00003: +DATA/RENODBDR/DATAFILE/sysaux.261.1128095911
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00004: +DATA/RENODBDR/DATAFILE/undotbs1.262.1128095919
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00007: +DATA/RENODBDR/DATAFILE/users.265.1128095929
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00009: +DATA/RENODBDR/DATAFILE/undotbs2.267.1128095933
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00010: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/system.268.1128095933
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00011: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/sysaux.269.1128095937
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00012: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undotbs1.270.1128095941
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00013: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/undo_2.271.1128095941
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00014: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/users.272.1128095943
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00015: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs1.273.1128095943
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00016: +DATA/RENODBDR/DATAFILE/tbs123.274.1128095945
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service RENODBPR
destination for restore of datafile 00017: +DATA/RENODBDR/F29636F2F7466178E0530A0278C024A1/DATAFILE/tbs2.275.1128095947
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 08-FEB-23

RMAN>

Once the restore completed, I saw that the standby database was fully synced up. However, I also noticed that the standby redo logs on the standby database were incorrect: they still pointed at the primary database's file paths.

Errors in file /u01/app/oracle/diag/rdbms/renodbdr/renodbdr1/trace/renodbdr1_rsm0_5984.trc:
ORA-00313: open failed for members of log group 5 of thread 1
ORA-00312: online log 5 thread 1: '+DATA/RENODBPR/ONLINELOG/group_5.335.1128095697'
ORA-17503: ksfdopn:2 Failed to open file +DATA/RENODBPR/ONLINELOG/group_5.335.1128095697
ORA-15012: ASM file '+DATA/RENODBPR/ONLINELOG/group_5.335.1128095697' does not exist
2023-02-08T16:44:06.725386-06:00
Errors in file /u01/app/oracle/diag/rdbms/renodbdr/renodbdr1/trace/renodbdr1_rsm0_5984.trc:
ORA-00313: open failed for members of log group 6 of thread 1
ORA-00312: online log 6 thread 1: '+DATA/RENODBPR/ONLINELOG/group_6.336.1128095703'
ORA-17503: ksfdopn:2 Failed to open file +DATA/RENODBPR/ONLINELOG/group_6.336.1128095703
ORA-15012: ASM file '+DATA/RENODBPR/ONLINELOG/group_6.336.1128095703' does not exist

So I had to perform the extra step of dropping the standby redo logs and adding them back on the standby database. Once I did this, I was able to open the standby database in READ ONLY mode with MRP apply running.
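
Before dropping them, you can confirm exactly which standby redo log members still point at the primary's path with a query like the one below (my own check, added for illustration; it was not part of the original run):

SQL> select f.group#, l.thread#, f.member
     from v$logfile f, v$standby_log l
     where f.group# = l.group#
     and f.member like '+DATA/RENODBPR/%';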

5. On the Standby database, drop and recreate the standby redo logs

SQL> select group#, thread#, bytes/1024/1024 MB  from  v$standby_log;

    GROUP#    THREAD#         MB
---------- ---------- ----------
         5          1        200
         6          1        200
         7          1        200
         8          2        200
         9          2        200
        10          2        200

6 rows selected.

SQL> ALTER DATABASE RECOVER  managed standby database cancel;
Database altered.

SQL> alter database drop standby logfile group 5;
Database altered.
SQL> alter database drop standby logfile group 6;
Database altered.
SQL> alter database drop standby logfile group 7;
Database altered.
SQL> alter database drop standby logfile group 8;
Database altered.
SQL> alter database drop standby logfile group 9;
Database altered.
SQL> alter database drop standby logfile group 10;
Database altered.
SQL> alter database add standby logfile thread 1 group 5 ('+DATA') size 209715200;
Database altered.
SQL>  alter database add standby logfile thread 1 group 6 ('+DATA') size 209715200;
Database altered.
SQL>  alter database add standby logfile thread 1 group 7 ('+DATA') size 209715200;
Database altered.
SQL>  alter database add standby logfile thread 2 group 8 ('+DATA') size 209715200;
Database altered.
SQL>  alter database add standby logfile thread 2 group 9 ('+DATA') size 209715200;
Database altered.
SQL> alter database add standby logfile thread 2 group 10 ('+DATA') size 209715200;
Database altered.

SQL>  select group#, thread#, bytes/1024/1024 MB  from  v$standby_log;

    GROUP#    THREAD#         MB
---------- ---------- ----------
         5          1        200
         6          1        200
         7          1        200
         8          2        200
         9          2        200
        10          2        200

6 rows selected.

SQL> 
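
Note: since this is a two-node RAC standby, each redo thread needs its own set of standby redo logs, which is why groups 8, 9 and 10 are recreated with thread 2 above. Also, one way to avoid the mismatched log paths in the first place (a suggestion, not something done in this run) is to set log_file_name_convert on the standby so that the primary's log paths are translated automatically. It is a static parameter, so the restart in the next step would pick it up:

SQL> alter system set log_file_name_convert='+DATA/RENODBPR','+DATA/RENODBDR' scope=spfile sid='*';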

6. Stop and start the Standby database
[oracle@labdrhost01 ~]$ srvctl stop database -d renodbdr
[oracle@labdrhost01 ~]$ srvctl start database -d renodbdr
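
If managed recovery does not resume automatically after the restart (for example, when the configuration is not managed by Data Guard Broker), you can restart it manually with the standard command:

SQL> alter database recover managed standby database disconnect from session;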


7. Check the lag on the standby database and verify that MRP is applying the logs to keep it in sync.
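
A general-purpose way to check this from SQL*Plus (illustrative queries; your output will vary):

SQL> select name, value, time_computed from v$dataguard_stats
     where name in ('transport lag','apply lag');

SQL> select process, status, thread#, sequence#, block#
     from v$managed_standby where process like 'MRP%';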

You can also check the alert log of the standby database to confirm that MRP is applying the logs.
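
For example (the alert log location here is inferred from the trace file paths shown earlier; adjust it for your environment):

[oracle@labdrhost01 ~]$ tail -f /u01/app/oracle/diag/rdbms/renodbdr/renodbdr1/trace/alert_renodbdr1.log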

I liked this one-step recovery method when compared to the old approach (used until 12c), where we had to perform a few more steps to re-sync the physical standby database using the 'rolling forward with an incremental backup' method.

Hope this helps.



Thanks
Sambaiah Sammeta