
Wednesday, March 24, 2021


Oracle Grid Rolling Upgrade from 18c to 19c.

This document walks through upgrading Grid Infrastructure from 18c to 19c in a rolling fashion.

In this post, we will be upgrading Oracle Grid Infrastructure 18c to 19.8.0.0 (19c with the July 2020 RU). Let's check out the steps.

Pre-upgrade checks: Below are the prerequisites that we need to verify before we plan the grid upgrade.

1) Check if /u01 has enough free space.

It is advised to have a minimum of 30GB to 50GB of free space in /u01.
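A quick way to check the free space (a minimal sketch; the filesystem layout will differ in your environment):

df -h /u01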

2) Patches to be applied before upgrading 19c.

You can check the MOS note below to get the list of all the patches that need to be applied before upgrading to 19c.

Patches to apply before upgrading Oracle GI and DB to 19c or downgrading to previous release (Doc ID 2539751.1)


Check if patch 28553832 is already applied to the Grid home. If it's not applied, please apply it.
This patch is required on the 18c grid home in order to upgrade it to 19c.

[oracle@node1 ~]$ . oraenv
+ASM1
[oracle@node1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 28553832
28540666, 28544237, 28545898, 28553832, 28558622, 28562244, 28567875

3) Minimum Grid version: For 18c, Oracle Database and Grid Infrastructure must be at least 18.3.0.0.180717.

4) Set the SGA in ASM to the minimum supported value.

The ASM SGA must be at least 3GB for the 19c GI to upgrade and run correctly.

sqlplus / as sysasm
SQL> show parameter sga_max_size 
SQL> show parameter sga_target 

Set them if they are not at least 3GB:

SYS@+ASM1> alter system set sga_max_size = 3G scope=spfile sid='*';
SYS@+ASM1> alter system set sga_target = 3G scope=spfile sid='*';

5) Verify no active rebalance is running

SYS@+ASM1> select count(*) from gv$asm_operation;
COUNT(*)
----------
       0

6) Download the required software and stage it on the first node of the cluster.

a) Download from Oracle Software Delivery Cloud (https://edelivery.oracle.com) 

GRID software: V982068-01.zip (Oracle Database Grid Infrastructure 19.3.0.0.0 for Linux x86-64)

b) Latest 19c Grid Infrastructure Release Update (RU), July 2020 RU - Patch 31305339: GI RELEASE UPDATE 19.8.0.0.0

c) Latest OPatch release, Patch 6880880, for 19.x and all other database versions used.

- p6880880_190000_Linux-x86-64.zip
- https://updates.oracle.com/download/6880880.html

d) Copy the software to /export/Sam/19c-Grid/

7) Create the required directories

As root user on Node 1
mkdir -p /u01/app/19.0.0.0/grid

If it's an Exadata machine, you can use the dcli command to create the directories on all the nodes in one shot.

dcli -l root -g /root/dbs_group   mkdir -p /u01/app/19.0.0.0/grid
dcli -l root -g /root/dbs_group  chown oracle:oinstall /u01/app/19.0.0.0/grid

8) Extract the Grid software.

The 19c Grid Software is extracted directly to the Grid Home. The grid runInstaller option is no longer supported. Run the following command on the database server where the software is staged.

As Grid user on node 1

unzip -q /export/Sam/19c-Grid/V982068-01.zip -d /u01/app/19.0.0.0/grid

9) Run the Cluster verification utility.

As Grid user on node 1.

cd /u01/app/19.0.0.0/grid/
unset ORACLE_HOME ORACLE_BASE ORACLE_SID

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/18.0.0.0/grid -dest_crshome /u01/app/19.0.0.0/grid -dest_version 19.8.0.0.0 -fixup -verbose

When issues are discovered after running the Cluster Verification Utility (CVU), a runfixup.sh script is generated in the /tmp/CVU_19.0.0.0.0_grid directory. Please be aware that this script makes changes to your environment. You will be given the opportunity to run it before executing gridSetup.sh on each database server.
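If CVU reports fixable issues, the generated script can be run as root on each affected node (a sketch; the exact directory name is printed in the CVU output):

[root@node1 ~]# /tmp/CVU_19.0.0.0.0_grid/runfixup.sh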

10) ACFS filesystems

If you have any mountpoints on ACFS, it is recommended to stop them cleanly before executing gridSetup.sh. During the upgrade phase the script shuts down the entire cluster stack on the node, and if the ACFS unmount fails you will receive an error (which can be tedious and stressful to handle).

[oracle@Node1 ~]$ $ORACLE_HOME/bin/crsctl stat res -t |grep acfs
ora.datac1.acfsvol01.acfs

ONLINE  ONLINE       Node1            mounted on /acfs01,S
ONLINE  ONLINE       Node2            mounted on /acfs01,S
ONLINE  ONLINE       Node3            mounted on /acfs01,S
ONLINE  ONLINE       Node4            mounted on /acfs01,S
ora.datac1.ghchkpt.acfs
[oracle@Node1 ~]$ $ORACLE_HOME/bin/crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=CANONICAL_VOLUME_DEVICE=/dev/asm/acfsvol01-311
VOLUME_DEVICE=/dev/asm/acfsvol01-311

Check and stop the ACFS file systems:

/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node1
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node2
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node3
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node4

11) Put the cluster in blackout in OEM and comment out any cron jobs that might send alerts.
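For example, a blackout can be created from the OEM command line (a sketch only; it assumes emcli is installed and logged in, the target name and type are placeholders, and the flags may vary by OEM version, so check "emcli help create_blackout"):

emcli create_blackout -name="GI_19c_rolling_upgrade" -add_targets="prodcluster:cluster" -reason="GI 18c to 19c upgrade" -schedule="duration:-1"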

12) Prepare to apply the July 2020 RU to the 19c Grid home on node1.

This is something to take note of: it is possible (and recommended) to patch the new GI home even before installing 19c GI. This is a nice improvement from Oracle because it saves time. If you are working on an 8-node cluster, it saves the effort of patching every node separately: when you run gridSetup.sh with -applyRU, it patches the staged home first, then invokes the GUI and copies the patched home to all the other nodes.

a) Unzip the July 2020 RU.

cd /export/Sam/19c-Grid/
unzip p31305339_190000_Linux-x86-64.zip 
ls -ltr
[grid@node1 +ASM1]$ ls -l
total 232
drwxr-x---  7 grid oinstall   4096 Oct  9 17:11 31305339
drwxr-x--- 14 grid oinstall   4096 Jan 24 15:23 OPatch-ORG
-rw-rw-r--  1 grid oinstall 225499 Oct 15 13:24 PatchSearch.xml

b) Copy the latest OPatch to the new Grid home.

As Root on Node1,
cd /u01/app/19.0.0.0/grid
mv OPatch OPatch.orig.date
cp /export/Sam/19c-Grid/p6880880_190000_Linux-x86-64.zip .
unzip p6880880_190000_Linux-x86-64.zip
chown -Rf oracle:oinstall OPatch
chmod 755 OPatch

13) Grid upgrade:

Run gridSetup.sh with the -applyRU option to apply the patch and start the grid upgrade as below.

The installation log is located at /u01/app/oraInventory/logs. For OUI installations or execution of critical scripts it is recommended to use VNC or screen to avoid problems in case the connection with the server is lost.
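For example, the upgrade session can be run inside screen so that it survives a lost connection (a sketch; the session name is arbitrary):

screen -S gi19c_upgrade      # start a named session and run gridSetup.sh inside it
screen -r gi19c_upgrade      # reattach to the session after a disconnect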

Perform these instructions as the Grid Infrastructure software owner (which is grid in this document) to install the 19c Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 12.1.0.2, 12.2.0.1, 18.1.0.0 to 19c. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 19c Grid Infrastructure Home the active Grid Infrastructure Home.

For systems with a standby database in place this step can be performed either before, at the same time or after extracting the Grid image file on the primary system.

-applyRU: when passed as a parameter, this applies the Release Update BEFORE the 19c GI installation starts.

- mgmtDB parameters: With 19c the MGMTDB is no longer needed; these parameters disable its installation so the options are not even shown during the graphical installation.


[grid@Node1 +ASM1]$
[grid@Node1 +ASM1]$ unset ORACLE_HOME
[grid@Node1 +ASM1]$ unset ORACLE_BASE
[grid@Node1 +ASM1]$ unset ORACLE_SID
[grid@Node1 ]$
[grid@Node1 ]$ cd /u01/app/19.0.0.0/grid/
[grid@Node1 ]$
[grid@Node1 ]$ export DISPLAY=xx.xx.xxx.xx:1.0
[grid@Node1 ]$ /u01/app/19.0.0.0/grid/gridSetup.sh -applyRU /export/Sam/19c-Grid/31305339 -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

This command first applies the RU to the new home and then invokes the GUI for the Grid upgrade.

 




On the node selection screen, do not select "Skip upgrade on unreachable nodes" and click "Next".



Skip the EM registration (this can be done after the installation) and click "Next".






Un-check the root script execution option (we will run rootupgrade.sh manually) and click "Next".



    


rootupgrade.sh needs to be executed node by node.

 

NOTE: After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node. 

Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.

NODE1: Stop all database instances and ACFS mounts on this node (see the sketch below), then run rootupgrade.sh.
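A minimal sketch of stopping things on a node before rootupgrade.sh (the database name, instance name and ACFS volume device are placeholders from this environment; repeat the instance stop for every database running on the node):

/u01/app/18.0.0/grid/bin/srvctl stop instance -d mydb -i mydb1 -o immediate
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node1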

[root@Node1]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.0.0.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
..
…..
………
2020/10/03 14:00:57 CLSRSC-474: Initiating upgrade of resource types
2020/10/03 14:02:09 CLSRSC-475: Upgrade of resource types successfully initiated.
2020/10/03 14:02:24 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2020/10/03 14:02:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

------------------------------------------------------------------------------------

NODE2: Stop all database instances and ACFS mounts on this node, then run rootupgrade.sh.

[root@Node2 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME=  /u01/app/19.0.0.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
..
…..
…….
2020/10/03 14:45:04 CLSRSC-476: Finishing upgrade of resource types
2020/10/03 14:45:18 CLSRSC-477: Successfully completed upgrade of resource types
2020/10/03 14:45:45 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.

Successfully updated XAG resources.
2020/10/03 15:11:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

----------------------------------------------------------------------------------

NODE3: Stop all database instances and ACFS mounts on this node.

Once Node 2 completes, run rootupgrade on node 3

-------------------------------------------------------------------------------------

NODE4: Stop all database instances and ACFS mounts on this node.

Once node 3 also completes, run the rootupgrade on Node 4
-------------------------------------------------------------------------------------

Once rootupgrade.sh has been run on all the nodes, continue with the graphical installation and press OK.





14) Once the upgrade completes, check that ASM is up and running on all the nodes, and perform an extra check on the status of the Grid Infrastructure post-upgrade by executing the following commands from one of the compute nodes:


[root@Node1 ~]# /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
Node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
(output continues for the remaining nodes)
[root@Node2 ~]# /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
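Optionally, the installed software version can also be checked per node (a sketch; the output line is illustrative):

[root@Node1 ~]# /u01/app/19.0.0.0/grid/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [Node1] is [19.0.0.0.0]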

15) Check the patches applied to the new Grid home:

[oracle@Node101 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31228670;REBALANCE DISK RESYNC CAUSING LOST WRITE ORA-00600 [KDSGRP1] OR ORA-01172 ON CRASH RECOVERY
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)

16) COMPATIBILITY parameter (set after 1 week)

WAIT FOR 1 WEEK FOR THIS STEP
---------------------------------------

Please wait for 1 week, then check and set the compatibility attribute on all the diskgroups to allow the new features to be used. (Raising the diskgroup compatibility cannot be rolled back, which is why we wait.)
. oraenv
+ASM1
SELECT name AS diskgroup, substr(compatibility,1,12) AS asm_compat,
substr(database_compatibility,1,12) AS db_compat FROM V$ASM_DISKGROUP;

DISKGROUP                      ASM_COMPAT   DB_COMPAT
------------------------------ ------------ ------------
DATAC1                         18.0.0.0.0   11.2.0.4.0
RECOC1                         18.0.0.0.0   11.2.0.4.0

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';
ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';

SELECT name AS diskgroup, substr(compatibility,1,12) AS asm_compat,
substr(database_compatibility,1,12) AS db_compat FROM V$ASM_DISKGROUP;

DISKGROUP                      ASM_COMPAT   DB_COMPAT
------------------------------ ------------ ------------
DATAC1                         19.3.0.0.0   11.2.0.4.0
RECOC1                         19.3.0.0.0   11.2.0.4.0

17) Check that Flex ASM cardinality is set to "ALL".

Note: Starting with release 12.2, ASM is configured as "Flex ASM". By default, Flex ASM cardinality is set to 3.

This means configurations with four or more database nodes in the cluster might only see ASM instances on three nodes. Nodes without an ASM instance running on it will use an ASM instance on a remote node within the cluster. Only when the cardinality is set to “ALL”, ASM will bring up the additional instances required to fulfill the cardinality setting.

On NODE 1:

. oraenv
+ASM1
srvctl config asm

The output should include "ASM instance count: ALL". If it does not, set it:

srvctl modify asm -count ALL

 

18)  Perform Inventory update

An inventory update is required for the 19c Grid Home because in 19c the cluster node names are not registered in the inventory, while tools from older database versions rely on node names from the inventory.

Please run the following command on the local node when using earlier releases of database with 19c GI.

Run this step on every node.

Node1

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node1

Node2

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node2

Node3

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node3

Node4

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node4

19)  Disable Diagsnap for Exadata

Due to Bugs 24900613, 25785073 and 25810099, Diagsnap should be disabled for Exadata. 

(grid)$ cd /u01/app/19.0.0.0/grid/bin

(grid)$ ./oclumon manage -disable diagsnap

 

20) Edit the oratab to point the Grid entry at the new 19c GRID HOME.
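For example, the ASM entry in /etc/oratab would end up looking something like this (a sketch; the SID and the restart flag may differ on your system):

+ASM1:/u01/app/19.0.0.0/grid:N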

21) Remove the blackouts

22) Start the listener and all the databases
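A minimal sketch (the database name is a placeholder; repeat the start for every database registered in the cluster):

srvctl start listener
srvctl start database -d mydb
srvctl status database -d mydb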


********************************  END OF UPGRADE  ********************************

Hope this document helps you. 


In the next post, I am going to show how to downgrade the above upgraded grid back to 18c. 



Thanks

Sam


Thursday, March 18, 2021

Fixing "ORA-02000: missing IDENTIFIED" issue while trying to create TDE key for a 12.1.0.2 PDB database

 


Addressing "ORA-02000: missing IDENTIFIED keyword" in 12.1.0.2 with the July 2020 BP:


Today I was working on enabling TDE in one of our 12.1.0.2 multitenant databases. When I tried to create the TDE master key for the PDB (TLEDPDB), I was getting "ORA-02000: missing IDENTIFIED keyword".

It turns out this is a known bug, and we have to apply one-off patch 27487279 to fix the issue.

My Environment: 

Hostname         :   vmrac-node1 
                       vmrac-node2

CDB database    : SAMSACDB
PDB database    : TLEDPDB
Patch level         : 2020 July BP.

Below is the issue.

[oracle@vmrac-node1 admin]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Mar 18 09:19:17 2021
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>set lines 200

select * from v$encryption_wallet;

WRL_TYPE            WRL_PARAMETER      STATUS          WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
------------------- ------------------------- ------------------------ ------------------- --------- --------- ----------
ASM                 +DATAC1/samsacdb/WALLET/  OPEN   AUTOLOGIN  SINGLE    NO                 0

SQL> show pdbs

    CON_ID CON_NAME                  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TLEDPDB                         MOUNTED

SQL> alter pluggable database TLEDPDB open instances=all;
Pluggable database altered.

SQL> alter pluggable database TLEDPDB save state;   
Pluggable database altered.

(FYI: the above command saves the state so that when we bounce the CDB, this PDB will open in READ WRITE mode.)
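The saved state can be confirmed with a quick query against DBA_PDB_SAVED_STATES (a sketch):

SQL> select con_name, state from dba_pdb_saved_states;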

Connect to the PDB and try to create master KEY for the PDB database.

SQL> alter session set container=TLEDPDB;
Session altered.

SQL>set lines 200
select * from v$encryption_wallet;

WRL_TYPE            WRL_PARAMETER         STATUS         WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
------------------- ------------------------- ------------------------ ------------------- --------- --------- ----------
ASM     +DATAC1/samsacdb/WALLET/  OPEN_NO_MASTER_KEY  AUTOLOGIN    SINGLE    UNDEFINED      0

SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY Welcome1 WITH BACKUP;

ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY Welcome1 WITH BACKUP
                                  *

ERROR at line 1:
ORA-02000: missing IDENTIFIED keyword

I tried again to rule out a syntax error, but the statement was fine.

SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY Welcome1 WITH BACKUP;
ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY Welcome1 WITH BACKUP
                                 *
ERROR at line 1:
ORA-02000: missing IDENTIFIED keyword
==================================================
To fix the issue, we need to apply the patch below and then retry creating the master key in the PDB.

Patch : p27487279_12102200714ProactiveBP_Linux-x86-64

I downloaded and applied this patch on both the nodes as below.
==================================================

[oracle@vmrac-node1 27487279]$ opatch apply

Oracle Interim Patch Installer version 12.2.0.1.24
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.2/db_1
Central Inventory : /u01/app/oraInventory
from           : /u01/app/oracle/product/12.1.0.2/db_1/oraInst.loc
OPatch version    : 12.2.0.1.24
OUI version       : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatch/opatch2021-03-15_14-32-21PM_1.log

Verifying environment and performing prerequisite checks...

OPatch continues with these patches:   27487279
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.2/db_1')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '27487279' to OH '/u01/app/oracle/product/12.1.0.2/db_1'
Patching component oracle.rdbms, 12.1.0.2.0...
Patch 27487279 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatch/opatch2021-03-15_14-32-21PM_1.log

OPatch succeeded.

[oracle@vmrac-node1 27487279]$ pwd
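Before retrying, you can confirm the patch is now present in the inventory (a sketch):

[oracle@vmrac-node1 27487279]$ $ORACLE_HOME/OPatch/opatch lspatches | grep 27487279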

==================================================

Now if I try to create the master key for the PDB, it works :)

[oracle@vmrac-node1 12.1]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Mar 18 09:44:41 2021
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> show pdbs

   CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                     READ ONLY  NO
         3 TLEDPDB                       READ WRITE NO

SQL> alter session set container=TLEDPDB;
Session altered.

SQL> ADMINISTER KEY MANAGEMENT SET KEY FORCE KEYSTORE IDENTIFIED BY Welcome1 WITH BACKUP;
keystore altered.

SQL>

It worked now without any issues.


==================================================

Thanks
Sambaiah Sammeta


Monday, March 8, 2021

 

Adding Pluggable Databases to an existing Multitenant Data Guard environment.


We can use the steps below to add a PDB to a RAC container database that has a RAC standby database.

1. Check the source database information.

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME

------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ WRITE           04-FEB-21
virtual-19crac2.localdomain     buzz18c2         PRIMARY          READ WRITE           04-FEB-21
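The output above can be produced with a query along these lines (a sketch; column formatting commands are omitted):

SQL> select i.host_name as database_host, i.instance_name as db_name,
            d.database_role, d.open_mode, i.startup_time
     from gv$instance i, v$database d;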

2. Note down the GUID of the source database in case we need to check something in the standby database, as all the datafiles on the standby will be created in a directory named after this GUID.

SQL> select guid from v$containers;
GUID
--------------------------------
BA3968236D0B4A71E0535018A8C08092

3. Shut down the non-CDB primary cleanly and then restart only one instance in READ ONLY mode.

[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c
[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

[oracle@virtual-19crac1 ~]$ srvctl start instance -d buzz18c -n virtual-19crac1 -o "READ ONLY"

[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

[oracle@virtual-19crac1 ~]$sqlplus / as sysdba

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     buzz18c1         PRIMARY          READ ONLY            04-FEB-21


4. In the source non-CDB database, create a manifest file for plugging in.

mkdir -p /u01/software/migration

SQL> exec dbms_pdb.describe('/u01/software/migration/buzz18c.xml');

PL/SQL procedure successfully completed

5. Shut down the source non-CDB database.

[oracle@virtual-19crac1 ~]$ srvctl stop  database -d buzz18c
[oracle@virtual-19crac1 ~]$ srvctl status database -d buzz18c
Instance buzz18c1 is not running on node virtual-19crac1
Instance buzz18c2 is not running on node virtual-19crac2

6. Create the PDB in the target CDB database as below, using the STANDBYS=NONE option.

SQL> create pluggable database buzz18c USING '/u01/software/migration/buzz18c.xml' tempfile reuse NOCOPY standbys=NONE;

Pluggable database created.

7.  Check the status of the PDB 

set lines 200
col open_mode for a15
col name for a10
SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE       RECOVERY
---------- --------------- --------
PDB$SEED   MOUNTED         ENABLED
BUZZ18C    MOUNTED         ENABLED

8. Open the PDB in upgrade mode and check the status again.

ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;
Pluggable database altered.

SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE       RECOVERY
---------- --------------- --------
PDB$SEED   MOUNTED         ENABLED
BUZZ18C    MIGRATE         ENABLED


9. As you can see, the PDB was plugged in and is in MIGRATE mode in the primary CDB. Let's check its status in the standby CDB.

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtualdr-19crac1.localdomain  test19cstdby1    PHYSICAL STANDBY MOUNTED              03-FEB-21
virtualdr-19crac2.localdomain  test19cstdby2    PHYSICAL STANDBY MOUNTED              03-FEB-21

SQL>  set lines 200
col open_mode for a15
col name for a10
SELECT name,open_mode,recovery_status FROM v$pdbs;

NAME       OPEN_MODE       RECOVERY
---------- --------------- --------
PDB$SEED   MOUNTED         ENABLED
BUZZ18C    MOUNTED         DISABLED

Note: As you can see, in the standby CDB the newly created PDB is MOUNTED and its RECOVERY status is DISABLED.

If you check the alert log of the standby database, you can see that the PDB datafiles are recorded as 'UNNAMED'. Check below.

Alert log output
----------------------

Resize operation completed for file# 3, old size 1249280K, new size 1259520K
2021-02-04T10:08:34.296502-06:00
MRP0 (PID:1903): Media Recovery Log +DATA/TEST19CSTDBY/ARCHIVELOG/2021_02_04/thread_1_seq_255.1633.1063616535
Recovery created pluggable database BUZZ18C
BUZZ18C(6):File #37 added to control file as 'UNNAMED00037'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/system.1969.1063291795'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.
BUZZ18C(6):File #38 added to control file as 'UNNAMED00038'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/sysaux.1762.1063291831'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.
BUZZ18C(6):File #39 added to control file as 'UNNAMED00039'. Originally created as:
BUZZ18C(6):'+DATA/BUZZ18C/DATAFILE/undotbs1.1756.1063291857'
BUZZ18C(6):because the pluggable database was created with nostandby
BUZZ18C(6):or the tablespace belonging to the pluggable database is
BUZZ18C(6):offline.
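The placeholder datafiles can also be listed directly from the standby instance (a sketch; on a mounted standby this can be run from the root container):

SQL> select file#, name, status from v$datafile where name like '%UNNAMED%';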


10) At this stage, let's focus on the primary PDB. Since the PDB has been plugged in, let's continue with the steps to convert the non-CDB into a PDB by connecting to the PDB and running $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql.

 10a) Connect to Primary CDB database

DATABASE_HOST                  DB_NAME           DATABASE_ROLE    OPEN_MODE           STARTUP_TIME
------------------------------ ---------------- ---------------- -------------------- --------------------
virtual-19crac1.localdomain     test19c1         PRIMARY          READ WRITE           03-FEB-21
virtual-19crac2.localdomain     test19c2         PRIMARY          READ WRITE           03-FEB-21

 10b) Connect to the PDB database as below.

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 BUZZ18C                        MIGRATE    YES

SQL> alter session set container=BUZZ18C;
Session altered.

 10c) Run the script below to convert the non-CDB into a PDB.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

Generally this script won't fail, but in my case I had MAX_STRING_SIZE set at the CDB level, and because of this parameter the script failed at the last step. Below is the error message.

Last few lines from above command
---------------------------------
10:19:47 SQL> alter pluggable database "&pdbname" open restricted;
alter pluggable database "BUZZ18C" open restricted
*
ERROR at line 1:
ORA-14696: MAX_STRING_SIZE migration is incomplete for pluggable database
BUZZ18C
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.9.0.0.0
[oracle@virtual-19crac1 migration]$ sqlplus / as sysdba

To address this issue, PLEASE FOLLOW the sub-steps below ONLY if you got the above error. If you didn't get the above error, you can go to step 11.

So I had to start the CDB primary database in upgrade mode, open this PDB in upgrade mode, and run the script below.

10c1) Connect to the primary CDB, set cluster_database to false, and stop the database.

SQL > alter system set cluster_database=false scope=spfile sid='*';
srvctl stop database -d test19c

10c2) Start the CDB primary database in upgrade mode (from one instance).

SQL> startup upgrade

10c3) Open the new PDB in upgrade mode.

SQL>  alter pluggable database BUZZ18C open upgrade;

10c4) Connect to the new PDB and run the script below related to MAX_STRING_SIZE.

SQL> alter session set container=BUZZ18C;
SQL> show con_name
CON_NAME
------------------------------
BUZZ18C

@?/rdbms/admin/utl32k.sql

10c5) Set cluster_database back to TRUE and shut down the CDB primary.

SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shut immediate;

10c6) Start the CDB primary database using srvctl.

[oracle@virtual-19crac1 migration]$ srvctl start database -d test19c
[oracle@virtual-19crac1 migration]$ srvctl status database -d test19c
Instance test19c1 is running on node virtual-19crac1
Instance test19c2 is running on node virtual-19crac2

10c7) Open the PDB in upgrade mode and re-run the noncdb_to_pdb script.

SQL> ALTER pluggable DATABASE BUZZ18C   OPEN upgrade;
Pluggable database altered.

SQL> SELECT name,open_mode,recovery_status FROM v$pdbs;
NAME       OPEN_MODE       RECOVERY
---------- --------------- --------
PDB$SEED     MOUNTED         ENABLED
BUZZ18C       MIGRATE         ENABLED

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

11) Connect to the primary CDB and open the new PDB database.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 BUZZ18C                        MOUNTED

SQL> alter pluggable database BUZZ18c OPEN;

Pluggable database altered.


SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 BUZZ18C                        READ WRITE NO

12) In the standby container, restore the PDB from the primary as below.

run
{
set newname for pluggable database BUZZ18C to new;
restore pluggable database BUZZ18C from service test19c;
}

Output:
---------------------------------------------------------------------

[oracle@chopstdby-19crac1 ~]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:11:50 2021
Version 19.9.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST19C (DBID=2437448537, not open)
RMAN> run
{
set newname for pluggable database BUZZ18C to new;
restore pluggable database BUZZ18C from service test19c;

}

2> 3> 4> 5>

executing command: SET NEWNAME
Starting restore at 04-FEB-21
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=71 instance=test19cstdby1 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service test19c
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00037 to +DATA
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service test19c
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00038 to +DATA
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service test19c
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00039 to +DATA
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service test19c
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00040 to +DATA
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service test19c
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00041 to +DATA
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 04-FEB-21
RMAN>

---------------------------------------------------------------------

13)  Stop the log apply process on the standby database as below.


DGMGRL> edit database test19cstdby set state='APPLY-OFF';
Succeeded.

DGMGRL> show database test19cstdby
Database - test19cstdby
   Role:               PHYSICAL STANDBY
   Intended State:     APPLY-OFF
   Transport Lag:      23 minutes 48 seconds (computed 8 seconds ago)
   Apply Lag:          23 minutes 47 seconds (computed 8 seconds ago)
   Average Apply Rate: (unknown)
   Real Time Query:    OFF
   Instance(s):
     test19cstdby1 (apply instance)
     test19cstdby2
   Database Warning(s):
     ORA-16855: transport lag has exceeded specified threshold
Database Status:
WARNING


14) Now that redo apply is stopped, let's switch the standby PDB datafiles to the restored copies as below.

RMAN> switch pluggable database BUZZ18C to copy;

[oracle@chopstdby-19crac1 ~]$ rman target /
Recovery Manager: Release 19.0.0.0.0 - Production on Thu Feb 4 11:18:34 2021
Version 19.9.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST19C (DBID=2437448537, not open)

RMAN> switch pluggable database BUZZ18C to copy;
using target database control file instead of recovery catalog
datafile 37 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317"
datafile 38 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331"
datafile 39 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339"
datafile 40 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353"
datafile 41 switched to datafile copy "+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355"
RMAN>


15) Enable the PDB recovery in the standby container.

SQL> ALTER SESSION SET container=BUZZ18C;

Session altered.

SQL> ALTER pluggable DATABASE enable recovery;

Pluggable database altered.

16) Connect to the PDB in the standby CDB and generate a dynamic script to bring the datafiles ONLINE.

 16a) Generate the dynamic SQL.

SQL> ALTER SESSION SET container=BUZZ18C;
Session altered.

SQL> set lines 200
col name for a99
select name, status from v$datafile;
select 'ALTER DATABASE datafile ''' || name || ''' online ;' from v$datafile;
Output
-------------------------------------------------------------------------------------------------
NAME                                                                                                STATUS
--------------------------------------------------------------------------------------------------- ---------------

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331                 RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339               RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353                  RECOVER

+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355               RECOVER


ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;

16b) Now run the above ONLINE commands in the PDB.

SQL> ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/system.1316.1063624317' online ;
Database altered.

ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/sysaux.1312.1063624331' online ;

Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs1.1313.1063624339' online ;

Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/users.1306.1063624353' online ;
Database altered.

 ALTER DATABASE datafile '+DATA/TEST19CSTDBY/BA3968236D0B4A71E0535018A8C08092/DATAFILE/undotbs2.1304.1063624355' online ;

Database altered.

17) Now connect to the standby container and restart the apply process as below.

DGMGRL>  edit database test19cstdby set state='APPLY-ON';

Succeeded.

DGMGRL> show database test19cstdby;

Database - test19cstdby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      35 minutes 53 seconds (computed 24 seconds ago)
  Apply Lag:          35 minutes 52 seconds (computed 24 seconds ago)
  Average Apply Rate: 23.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    test19cstdby1 (apply instance)
    test19cstdby2
  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold
    ORA-16855: transport lag has exceeded specified threshold
Database Status:
WARNING

18) Now stop the standby CDB database, start it in READ ONLY mode, and check that it is receiving logs and is in sync with the primary database.
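A minimal sketch of that restart and a quick synchronization check (it assumes the standby db_unique_name is test19cstdby, as above):

srvctl stop database -d test19cstdby
srvctl start database -d test19cstdby -o "read only"

-- On the standby, check the highest applied sequence per thread:
SQL> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;

DGMGRL> show database test19cstdby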


Hope this doc helps you.


Thanks
Sam

 



Sunday, March 7, 2021

Setting up TDE in RAC database with Physical standby database in place

 Implementing TDE in RAC with Standby database configuration


============= Start of Step One, Steps for Primary database ====================

1. Add the keystore location in sqlnet.ora

on Node 1 of Primary RAC.

cp /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora.backup

vi /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=+DATA/$ORACLE_UNQNAME/WALLET)
    )
  )

Repeat the above in all the nodes of the cluster

2. Create a keystore in the primary database as below.

If it's a container database, connect to the root container and create the keystore as below.

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '+DATA/BASICPR/WALLET/' identified by Welcome1;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS  WALLET_TYPE  WALLET_OR  FULLY_BAC  CON_ID
--------  ---------------------  ------  -----------  ---------  ---------  ------
ASM       +DATA/basicpr/WALLET/  CLOSED  UNKNOWN      SINGLE     UNDEFINED       1

SQL> 

3. Open the keystore In primary database.

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY Welcome1 ;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS              WALLET_TYPE  WALLET_OR  FULLY_BAC  CON_ID
--------  ---------------------  ------------------  -----------  ---------  ---------  ------
ASM       +DATA/basicpr/WALLET/  OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     UNDEFINED       1

4. To avoid opening the keystore manually every time, we can enable auto-login so that you don't have to enter the password when we bounce the database.

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '+DATA/BASICPR/WALLET'  identified by Welcome1;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS              WALLET_TYPE  WALLET_OR  FULLY_BAC  CON_ID
--------  ---------------------  ------------------  -----------  ---------  ---------  ------
ASM       +DATA/basicpr/WALLET/  OPEN_NO_MASTER_KEY  AUTOLOGIN    SINGLE     UNDEFINED       1

5. Create the master key for the primary database.

Current Setting

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS              WALLET_TYPE  WALLET_OR  FULLY_BAC  CON_ID
--------  ---------------------  ------------------  -----------  ---------  ---------  ------
ASM       +DATA/basicpr/WALLET/  OPEN_NO_MASTER_KEY  AUTOLOGIN    SINGLE     NO              1

At this step you can observe that the status column value is OPEN_NO_MASTER_KEY when you query V$ENCRYPTION_WALLET. This is because the master key is not yet created. Below is the command to create a master key.

Now you can create the master key using "ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY Welcome1 WITH BACKUP":


SQL>  ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY Welcome1 WITH BACKUP;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS  WALLET_TYPE  WALLET_OR  FULLY_BAC  CON_ID
--------  ---------------------  ------  -----------  ---------  ---------  ------
ASM       +DATA/basicpr/WALLET/  OPEN    AUTOLOGIN    SINGLE     NO              1

This completes the TDE setup on the primary.

============= END of Step One, Steps for PRIMARY database ====================

============= Start of Step Two, Steps for STANDBY database ====================

Now for the standby database:

1. Add the keystore location in sqlnet.ora.

on Node 1 of Standby RAC.

cp /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora.backup

vi /u01/app/oracle/product/19c/db_1/network/admin/sqlnet.ora

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
    (METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=+DATA/$ORACLE_UNQNAME/WALLET)
    )
  )

Repeat the above in all the nodes of the cluster

2. On the primary database, create a temporary keystore on the filesystem and merge the existing ASM keystore into it, so that we can copy the keys from the filesystem keystore to the standby database.

mkdir -p /u01/software/basicpr_mig/wallet_bakp/

SQL>  ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/software/basicpr_mig/wallet_bakp/' identified by Welcome1;

keystore altered.

Merge the ASM keystore into the filesystem keystore created above:

 SQL>  ADMINISTER KEY MANAGEMENT MERGE KEYSTORE '+DATA/basicpr/WALLET/'  identified by Welcome1 INTO EXISTING KEYSTORE '/u01/software/basicpr_mig/wallet_bakp/' IDENTIFIED BY  Welcome1 WITH BACKUP;

keystore altered.

3) On the standby database server, create the directory below and copy the wallet files from the primary to this directory on the standby.


On Standby

[oracle@virtual-dr-19crac1 dbs]$ mkdir -p /u01/software/basicpr_mig/wallet_bakp/
[oracle@virtual-dr-19crac1 dbs]$ cd /u01/software/basicpr_mig/wallet_bakp/
[oracle@virtual-dr-19crac1 wallet_bakp]$ pwd

On Primary 

[oracle@chopvm-19crac1 dbs]$ cd /u01/software/basicpr_mig/wallet_bakp/
[oracle@chopvm-19crac1 wallet_bakp]$ pwd
/u01/software/basicpr_mig/wallet_bakp
[oracle@chopvm-19crac1 wallet_bakp]$ ls -ltr
total 8
-rw------- 1 oracle oinstall 2555 Mar  7 10:39 ewallet_2021030716392070.p12
-rw------- 1 oracle oinstall 3995 Mar  7 10:39 ewallet.p12

[oracle@chopvm-19crac1 wallet_bakp]$ scp * oracle@virtual-dr-19crac1:/u01/software/basicpr_mig/wallet_bakp/

oracle@virtual-dr-19crac1's password:
ewallet_2021030716392070.p12                                                                                                                             100% 2555     3.2MB/s   00:00

ewallet.p12                                                                                                                                              100% 3995     7.1MB/s   00:00


4) On the standby database server, we now move the keystore contents from the filesystem into an ASM keystore.

 4a) On the standby database, create a keystore in ASM.

SQL>  ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '+DATA/basicdr/WALLET/' IDENTIFIED BY  Welcome1;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS              WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  ---------------------  ------------------  -----------  ---------  --------  ---------  ------
ASM       +DATA/basicdr/WALLET/  OPEN_NO_MASTER_KEY  PASSWORD     SINGLE     NONE      UNDEFINED       0


 4b) Merge the filesystem keystore (where we copied the keys) into the ASM keystore:

 SQL> ADMINISTER KEY MANAGEMENT MERGE KEYSTORE '/u01/software/basicpr_mig/wallet_bakp/' identified by Welcome1 INTO EXISTING KEYSTORE '+DATA/basicdr/WALLET/' identified by Welcome1 WITH BACKUP;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  ---------------------  ------  -----------  ---------  --------  ---------  ------
ASM       +DATA/basicdr/WALLET/  CLOSED  UNKNOWN      SINGLE     NONE      UNDEFINED       0


 4c) Open the Keystore in the standby database.

SQL>  ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY  Welcome1;

keystore altered.

4d) You can now start the recovery and check whether the standby database is able to catch up with the primary.

SQL> ALTER DATABASE RECOVER  managed standby database using current logfile disconnect;

4e) To avoid opening the keystore manually every time, we can enable auto-login so that you don't have to enter the password when we bounce the database.

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '+DATA/basicdr/WALLET/' identified by Welcome1;

keystore altered.

SQL>   select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER          STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  ---------------------  ------  -----------  ---------  --------  ---------  ------
ASM       +DATA/basicdr/WALLET/  OPEN    PASSWORD     SINGLE     NONE      NO              0


4f) Bounce the database so that the keystore auto-login takes effect.

[oracle@virtual-dr-19crac1 wallet_bakp]$ srvctl stop database -d basicdr
[oracle@virtual-dr-19crac1 wallet_bakp]$ srvctl start database -d basicdr
[oracle@virtual-dr-19crac1 wallet_bakp]$ sqlplus / as sysdba

SQL>  select * from v$encryption_wallet;


WRL_TYPE  WRL_PARAMETER          STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  ---------------------  ------  -----------  ---------  --------  ---------  ------
ASM       +DATA/basicdr/WALLET/  OPEN    AUTOLOGIN    SINGLE     NONE      NO              0


4g) Finally, start the recovery process again and check whether the MRP process is applying the redo data.

ALTER DATABASE RECOVER  managed standby database using current logfile disconnect;

select inst_id,process, status, thread#, sequence#, block#, blocks from gv$managed_standby where process='MRP0';


select a.thread#, (select max (sequence#) from v$archived_log where archived='YES' and thread#=a.thread#) archived,max(a.sequence#) applied, (select max(sequence#) from v$archived_log where archived='YES' and thread#=a.thread#)-max(a.sequence#)gap from v$archived_log a where a.applied='YES' group by a.thread# order by thread#;

============= END of Step Two, Steps for STANDBY database ====================


Test case: In the PRIMARY database, let's create an encrypted tablespace, create a table in it, insert a row, and switch the logfile. Then, on the STANDBY database, check whether the table comes across and whether we can see the data in that encrypted table.

SQL> CREATE TABLESPACE ENCRYPTED_TS DATAFILE '+DATA' SIZE 100M AUTOEXTEND ON  maxsize 1G ENCRYPTION USING 'AES256' DEFAULT STORAGE(ENCRYPT);

Tablespace created.

SQL> select tablespace_name, encrypted from dba_tablespaces;

TABLESPACE_NAME                ENCRYPTED
------------------------------ ---------
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
UNDOTBS2                       NO
ENCRYPTED_TS                   YES

7 rows selected.

SQL> create table encrypt (name varchar2(100)) tablespace encrypted_ts;
Table created.

SQL> insert into encrypt values ('This tablespace is Encrypted');
1 row created.

SQL> commit;
Commit complete.

Switch a couple of logfiles.

SQL> alter system switch all logfiles;

System altered.
SQL> alter system switch all logfiles;
System altered.

Now, on the standby (opened READ ONLY), check whether the table is replicated and whether you can access it.

SQL> select * from encrypt;

NAME
--------------------------------------------------
This tablespace is Encrypted

Perfect!! TDE is enabled on both the primary and standby databases.

Hope this doc helps you.


Thanks
Sam!

======================================================================