Thursday, May 13, 2021

Cascaded Standby database configuration

 


 

Let's see how to build a cascaded standby database.

 

Our existing primary and physical standby databases:

 

Primary database.
=================

SQL> select name,db_unique_name,open_mode,database_role from gv$database;

NAME      DB_UNIQUE_NAME   OPEN_MODE    DATABASE_ROLE
--------- ---------------- ------------ ----------------
DELL12PR  dell12pr         READ WRITE   PRIMARY
DELL12PR  dell12pr         READ WRITE   PRIMARY

 

First Standby database.
=========================

DATABASE_HOST   DB_NAME     DATABASE_ROLE     OPEN_MODE             STARTUP_TIME
--------------- ----------- ----------------- --------------------- ------------
vmachine-dr2    dell12dr2   PHYSICAL STANDBY  READ ONLY WITH APPLY  01-MAY-21
vmachine-dr1    dell12dr1   PHYSICAL STANDBY  READ ONLY WITH APPLY  01-MAY-21

We will now build a cascaded standby database, which will receive the redo from the first standby database.

 

Second Standby database (Expected).
=================================

 

SQL>  select name,db_unique_name,open_mode,database_role from gv$database;

NAME      DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
--------- ------------------------------ -------------------- ----------------
DELL12PR  acer12dr                       READ ONLY            PHYSICAL STANDBY
DELL12PR  acer12dr                       READ ONLY            PHYSICAL STANDBY

Data Guard parameters in the primary database:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      +DATAC1/dell12pr/dr1dell12pr.dat
dg_broker_config_file2               string      +DATAC1/dell12pr/dr2dell12pr.dat
log_archive_config                   string      dg_config=(dell12pr,dell12dr)

SQL> alter system set log_archive_config='dg_config=(dell12pr,dell12dr,acer12dr)' scope=both sid='*';
System altered.

 

SQL> show parameter config

 

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
dg_broker_config_file1               string      +DATAC1/dell12pr/dr1dell12pr.dat
dg_broker_config_file2               string      +DATAC1/dell12pr/dr2dell12pr.dat
log_archive_config                   string      dg_config=(dell12pr,dell12dr,acer12dr)

 

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3='DEFER' SCOPE=BOTH sid='*';
System altered.
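As a quick sanity check, the destination state can be confirmed from v$archive_dest_status (dest_id 3 is the destination we just deferred; since LOG_ARCHIVE_DEST_3 itself is not set yet on the primary, an INACTIVE/DEFERRED status is expected):

SQL> select dest_id, status, destination from v$archive_dest_status where dest_id = 3;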

Add the TNS entries for the second standby database on both nodes.

ACER12DR_standby =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmachine-dr1.localdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmachine-dr2.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = acer12dr)
    )
  )

 

ACER12DR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmachine-dr1.localdomain)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = vmachine-dr2.localdomain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = acer12dr)
    )
  )
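Before moving on, a quick connectivity check from each node does no harm. Note that tnsping only verifies that the alias resolves and the listener responds; it does not check service registration:

[oracle@vmachine-dr1 ~]$ tnsping ACER12DR_standby
[oracle@vmachine-dr1 ~]$ tnsping ACER12DR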

 

Init file for the cascaded standby database.

 

[oracle@vmachine-dr1 dbs]$ cat initacer12dr1.ora
*.audit_file_dest='/u01/app/oracle/admin/acer12dr/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.db_block_size=8192
*.db_create_file_dest='+DATAC1'
*.db_domain=''
*.db_name='dell12pr'
*.db_unique_name='acer12dr'
*.db_recovery_file_dest='+DATAC1'
*.db_recovery_file_dest_size=4785m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=acer12drXDB)'
acer12dr2.instance_number=2
acer12dr1.instance_number=1
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=1140m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.sga_target=3420m
acer12dr2.thread=2
acer12dr1.thread=1
acer12dr1.undo_tablespace='UNDOTBS1'
acer12dr2.undo_tablespace='UNDOTBS2'

 

=============================================================

 

Create the adump directory on both the nodes.

mkdir -p /u01/app/oracle/admin/acer12dr/adump
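The directory must exist on the second node as well; with SSH equivalence between the standby nodes, something like this works (hostname taken from the TNS entries above):

[oracle@vmachine-dr1 ~]$ ssh vmachine-dr2 "mkdir -p /u01/app/oracle/admin/acer12dr/adump"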

 

 

=============================================================

Create the password file for the second standby database, or copy the first standby database's password file as below.

[oracle@vmachine-dr1 dbs]$ cp orapwdell12dr1 orapwacer12dr1

[oracle@vmachine-dr1 dbs]$ pwd

/u01/app/oracle/product/12c/db_1/dbs
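If you prefer to create a fresh password file instead of copying, orapwd can do it; just remember the SYS password must match the primary's, or redo transport will fail (the password below is a placeholder, not a real value):

[oracle@vmachine-dr1 dbs]$ orapwd file=orapwacer12dr1 password=<sys_password_of_primary>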

 

===========================================================================================================

 

Build the second standby database using RMAN duplicate.
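One assumption worth calling out before the duplicate: RMAN must be able to connect to the auxiliary instance while it is in NOMOUNT, which normally requires a static registration in the listener configuration on the standby hosts. A minimal sketch (the listener name and GLOBAL_DBNAME are assumptions; the ORACLE_HOME path is the one used throughout this post):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = acer12dr)
      (ORACLE_HOME = /u01/app/oracle/product/12c/db_1)
      (SID_NAME = acer12dr1)
    )
  )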

 

[oracle@vmachine-dr1 ~]$ rman target sys@dell12dr_standby auxiliary sys@acer12dr_standby

 

Recovery Manager: Release 12.1.0.2.0 - Production on Sat May 1 15:48:44 2021

 

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

 

target database Password:

connected to target database: DELL12PR (DBID=561887285)

auxiliary database Password:

connected to auxiliary database (not started)

 

RMAN> startup clone nomount

 

Oracle instance started

 

Total System Global Area    3590324224 bytes

 

Fixed Size                     2930608 bytes

Variable Size                889194576 bytes

Database Buffers            2684354560 bytes

Redo Buffers                  13844480 bytes

 

RMAN> duplicate target database for standby from active database;

 

Starting Duplicate Db at 01-MAY-21

using target database control file instead of recovery catalog

allocated channel: ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: SID=35 instance=acer12dr1 device type=DISK

 

contents of Memory Script:

{

   backup as copy reuse

   passwordfile auxiliary format  '/u01/app/oracle/product/12c/db_1/dbs/orapwacer12dr1'   ;

}

executing Memory Script

 

Starting backup at 01-MAY-21

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=77 instance=dell12dr1 device type=DISK

Finished backup at 01-MAY-21

 

contents of Memory Script:

{

   sql clone "create spfile from memory";

   shutdown clone immediate;

   startup clone nomount;

   restore clone from service  'dell12dr_standby' standby controlfile;

}

executing Memory Script

 

sql statement: create spfile from memory

 

Oracle instance shut down

 

connected to auxiliary database (not started)

Oracle instance started

 

Total System Global Area    3590324224 bytes

 

Fixed Size                     2930608 bytes

Variable Size                905971792 bytes

Database Buffers            2667577344 bytes

Redo Buffers                  13844480 bytes

 

Starting restore at 01-MAY-21

allocated channel: ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: SID=35 instance=acer12dr1 device type=DISK

 

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: restoring control file

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07

output file name=+DATAC1/ACER12DR/CONTROLFILE/current.1897.1071417077

output file name=+DATAC1/ACER12DR/CONTROLFILE/current.1895.1071417077

Finished restore at 01-MAY-21

 

contents of Memory Script:

{

   sql clone 'alter database mount standby database';

}

executing Memory Script

 

sql statement: alter database mount standby database

 

contents of Memory Script:

{

   set newname for clone tempfile  1 to new;

   switch clone tempfile all;

   set newname for clone datafile  1 to new;

   set newname for clone datafile  3 to new;

   set newname for clone datafile  4 to new;

   set newname for clone datafile  5 to new;

   set newname for clone datafile  6 to new;

   restore

   from service  'dell12dr_standby'   clone database

   ;

}

executing Memory Script

 

executing command: SET NEWNAME

 

renamed tempfile 1 to +DATAC1 in control file

 

executing command: SET NEWNAME

 

executing command: SET NEWNAME

 

executing command: SET NEWNAME

 

executing command: SET NEWNAME

 

executing command: SET NEWNAME

 

Starting restore at 01-MAY-21

using channel ORA_AUX_DISK_1

 

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATAC1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_AUX_DISK_1: restoring datafile 00003 to +DATAC1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_AUX_DISK_1: restoring datafile 00004 to +DATAC1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_AUX_DISK_1: restoring datafile 00005 to +DATAC1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:02

channel ORA_AUX_DISK_1: starting datafile backup set restore

channel ORA_AUX_DISK_1: using network backup set from service dell12dr_standby

channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATAC1

channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01

Finished restore at 01-MAY-21

 

contents of Memory Script:

{

   switch clone datafile all;

}

executing Memory Script

 

datafile 1 switched to datafile copy

input datafile copy RECID=16 STAMP=1071417095 file name=+DATAC1/ACER12DR/DATAFILE/system.1894.1071417085

datafile 3 switched to datafile copy

input datafile copy RECID=17 STAMP=1071417095 file name=+DATAC1/ACER12DR/DATAFILE/sysaux.1893.1071417089

datafile 4 switched to datafile copy

input datafile copy RECID=18 STAMP=1071417095 file name=+DATAC1/ACER12DR/DATAFILE/undotbs1.1892.1071417091

datafile 5 switched to datafile copy

input datafile copy RECID=19 STAMP=1071417095 file name=+DATAC1/ACER12DR/DATAFILE/undotbs2.1891.1071417093

datafile 6 switched to datafile copy

input datafile copy RECID=20 STAMP=1071417095 file name=+DATAC1/ACER12DR/DATAFILE/users.1890.1071417095

Finished Duplicate Db at 01-MAY-21

 

RMAN>

 

 

=====================================================================================

Set Data Guard-related parameters in the first standby database.

 

SQL> SELECT NAME,OPEN_MODE,DB_UNIQUE_NAME,DATABASE_ROLE FROM V$DATABASE;

 

NAME      OPEN_MODE            DB_UNIQUE_NAME                 DATABASE_ROLE

--------- -------------------- ------------------------------ ----------------

DELL12PR  READ ONLY WITH APPLY dell12dr                       PHYSICAL STANDBY

 

SQL> SHOW PARAMETER LOG_ARCHIVE_CONFIG

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

log_archive_config                   string      dg_config=(dell12pr)

 

SQL>  ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(dell12pr,dell12dr,acer12dr)' SCOPE=BOTH sid='*';

 

System altered.

 

SQL> SHOW PARAMETER LOG_ARCHIVE_CONFIG

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

log_archive_config                   string      DG_CONFIG=(dell12pr,dell12dr,acer12dr)

 

Set the log archive destination on the first standby so it cascades redo to acer12dr:

 

SQL> alter system set log_archive_dest_3='service=acer12dr VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) LGWR SYNC AFFIRM db_unique_name=acer12dr' scope=both sid='*';

System altered.
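Cascading ships redo from the first standby's standby redo logs, so both standbys need standby redo logs. If the duplicate did not carry them over, a sketch like this on the cascaded standby adds one per thread (group numbers and size are placeholders; the size should match the online redo logs):

SQL> alter database add standby logfile thread 1 group 11 ('+DATAC1') size 512m;
SQL> alter database add standby logfile thread 2 group 12 ('+DATAC1') size 512m;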

 

 

SQL> show parameter LOG_ARCHIVE_DEST_3

 

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

log_archive_dest_3                   string      service=acer12dr, VALID_FOR=(S

                                                 TANDBY_LOGFILES,STANDBY_ROLE)

                                                 LGWR SYNC AFFIRM db_unique_nam

                                                 e=acer12dr

 

 

 

 

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3='ENABLE' SCOPE=BOTH sid='*';

 

System altered.

=============================================================

 

For the cascaded standby database, create a pfile from the spfile that the duplicate generated, remove the hidden (underscore) parameters, and add the RAC-specific parameters.

Create a pfile from the default spfile:

 

SQL> create pfile='/u01/app/oracle/product/12c/db_1/dbs/initacer12dr1.ora_new' from spfile='/u01/app/oracle/product/12c/db_1/dbs/spfileacer12dr1.ora';

File created.

 

Modify the pfile as below.

[oracle@vmachine-dr1 dbs]$ cat initacer12dr1.ora_new
*.audit_file_dest='/u01/app/oracle/admin/acer12dr/adump'
*.audit_trail='DB'
*.cluster_database=TRUE
*.compatible='12.1.0.2.0'
*.connection_brokers='((TYPE=DEDICATED)(BROKERS=1))','((TYPE=EMON)(BROKERS=1))'# connection_brokers default value
*.control_files='+DATAC1/ACER12DR/CONTROLFILE/current.1897.1071417077','+DATAC1/ACER12DR/CONTROLFILE/current.1895.1071417077'#Restore Controlfile
*.core_dump_dest='/u01/app/oracle/diag/rdbms/acer12dr/acer12dr1/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATAC1'
*.db_domain=''
*.db_name='dell12pr'
*.db_recovery_file_dest='+DATAC1'
*.db_recovery_file_dest_size=4785M
*.db_unique_name='acer12dr'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=acer12drXDB)'
*.log_archive_config='DG_CONFIG=(dell12pr,dell12dr,acer12dr)'
*.log_archive_format='%t_%s_%r.dbf'
*.log_buffer=13012K# log buffer update
*.open_cursors=300
*.optimizer_dynamic_sampling=2
*.optimizer_mode='ALL_ROWS'
*.pga_aggregate_target=1140M
*.plsql_warnings='DISABLE:ALL'# PL/SQL warnings at init.ora
*.processes=300
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
*.result_cache_max_size=17536K
*.sga_target=3424M
*.skip_unusable_indexes=TRUE
acer12dr2.thread=2
acer12dr1.thread=1
acer12dr1.undo_tablespace='UNDOTBS1'
acer12dr2.undo_tablespace='UNDOTBS2'
acer12dr2.instance_number=2
acer12dr1.instance_number=1

 

Create an spfile in the shared ASM location from this pfile, as below:

SQL> create spfile='+datac1/ACER12DR/parameterfile/spfileacer12dr1.ora' from pfile='/u01/app/oracle/product/12c/db_1/dbs/initacer12dr1.ora_new';

File created.
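The srvctl commands below assume acer12dr is already registered with Grid Infrastructure. If it is not, a registration along these lines would be needed first (flags in the pre-12.2 style; role and paths taken from earlier in the post):

srvctl add database -d acer12dr -o /u01/app/oracle/product/12c/db_1 -p '+datac1/ACER12DR/parameterfile/spfileacer12dr1.ora' -r PHYSICAL_STANDBY
srvctl add instance -d acer12dr -i acer12dr1 -n vmachine-dr1
srvctl add instance -d acer12dr -i acer12dr2 -n vmachine-dr2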

 

[oracle@vmachine-dr1 dbs]$ srvctl start database -d acer12dr

 

[oracle@vmachine-dr1 dbs]$ srvctl status database -d acer12dr

Instance acer12dr1 is running on node vmachine-dr1

Instance acer12dr2 is running on node vmachine-dr2
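At this point you would normally start redo apply on one instance of the cascaded standby; the post does not show it here, but the standard command is:

SQL> alter database recover managed standby database disconnect from session;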

On the primary database, create a test table to verify that redo is cascading:

SQL>  create table CASCADE_TAB1 (name varchar2(100));
Table created.

SQL> insert into CASCADE_TAB1 values ('THIS TABLE IS FROM PRIMARY DATABASE');
1 row created.

SQL> commit;
Commit complete.

SQL> alter system switch all logfile;
System altered.

SQL> /
System altered.
SQL> /
System altered.

============================================================

In the first standby database (output from the monitoring queries shown later, in the cascaded standby section):

DATABASE_HOST                  DB_NAME           DATABASE_ROLE     OPEN_MODE             STARTUP_TIME
------------------------------ ----------------- ----------------- --------------------- ------------
vmachine-dr1.localdomain       dell12dr1         PHYSICAL STANDBY  READ ONLY WITH APPLY  01-MAY-21
vmachine-dr2.localdomain       dell12dr2         PHYSICAL STANDBY  READ ONLY WITH APPLY  01-MAY-21

 


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS

---------- --------- ------------ ---------- ---------- ---------- ----------

         1 MRP0      APPLYING_LOG          2         22         14     102400

 


 

   THREAD#   ARCHIVED    APPLIED        GAP

---------- ---------- ---------- ----------

         1         32         32          0

         2         21         20          1

 

SQL> select * from CASCADE_TAB1;

 

NAME

----------------------------------------------------------------------------------------------------

THIS TABLE IS FROM PRIMARY DATABASE

 

 

=============================================================

In the cascaded standby database.

 

 

 

[oracle@vmachine-dr1 ~]$ srvctl modify database -d acer12dr -startoption 'READ ONLY'

[oracle@vmachine-dr1 ~]$ srvctl stop database -d acer12dr


[oracle@vmachine-dr1 ~]$ srvctl start database -d acer12dr

[oracle@vmachine-dr1 ~]$ sqlplus / as sysdba

 

SQL*Plus: Release 12.1.0.2.0 Production on Tue May 4 15:35:09 2021

 

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

 

SQL> set lines 200

col DATABASE_HOST for a30;

col HOST_NAME for a15;

col DATABASE_ROLE for a10

col OPEN_MODE for a10

col STARTUP_TIME for a20

SELECT i.HOST_NAME "DATABASE_HOST" ,i.INSTANCE_NAME "DB_NAME", d.DATABASE_ROLE " DATABASE_ROLE", d.OPEN_MODE " OPEN_MODE ", STARTUP_TIME

from GV$DATABASE d, gv$instance i where i.INST_ID=d.INST_ID;

 

select inst_id,process, status, thread#, sequence#, block#, blocks from gv$managed_standby where process='MRP0';

 

select a.thread#, (select max (sequence#) from v$archived_log where archived='YES' and thread#=a.thread#) archived,max(a.sequence#) applied,

(select max(sequence#) from v$archived_log where archived='YES' and thread#=a.thread#)-max(a.sequence#)gap

from v$archived_log a where a.applied='YES' group by a.thread# order by thread#;


DATABASE_HOST                  DB_NAME           DATABASE_ROLE     OPEN_MODE             STARTUP_TIME
------------------------------ ----------------- ----------------- --------------------- ------------
vmachine-dr1.localdomain       acer12dr1         PHYSICAL STANDBY  READ ONLY             04-MAY-21
vmachine-dr2.localdomain       acer12dr2         PHYSICAL STANDBY  READ ONLY             04-MAY-21

 

no rows selected

(No MRP0 row here: managed recovery was not running on the cascaded standby when this was checked.)

 


   THREAD#   ARCHIVED    APPLIED        GAP

---------- ---------- ---------- ----------

         1         53         52          1

         2         37         37          0

 

SQL> select * from CASCADE_TAB1;

 

NAME

----------------------------------------------------------------------------------------------------

THIS TABLE IS FROM PRIMARY DATABASE

 

 

 

 




  

Tuesday, May 11, 2021

Installing 12c database binaries in silent mode in a RAC environment

 


We generally install the database software in GUI mode, but in case we need to install the binaries in silent mode, we can follow these steps.

Environment: two-node RAC with 19c GRID installed.


1) Download the 12.1 software and unzip it to this location:

/u01/software/12.1/database


[oracle@vmachine-19crac1 ~]$ cd /u01/software/12.1/database/
[oracle@vmachine-19crac1 database]$ ls -ltr
total 24
-rwxr-xr-x.  1 oracle oinstall  500 Feb  6  2013 welcome.html
-rwxr-xr-x.  1 oracle oinstall 8533 Jul  7  2014 runInstaller
drwxr-xr-x.  2 oracle oinstall   34 Jul  7  2014 rpm
drwxrwxr-x.  2 oracle oinstall   29 Jul  7  2014 sshsetup
drwxr-xr-x. 14 oracle oinstall 4096 Jul  7  2014 stage
drwxr-xr-x.  4 oracle oinstall 4096 Feb 20 09:38 install

drwxrwxr-x.  2 oracle oinstall   83 May 10 15:32 response

=========================================================


2) Create the 12c Oracle home directory on all the nodes of the cluster.

Node1: 
[oracle@vmachine-19crac1 ~]$ mkdir -p /u01/app/oracle/product/slient_12c


Node2 :

[oracle@vmachine-19crac2 ~]$ mkdir -p /u01/app/oracle/product/slient_12c

3) Make a copy of the sample response file.

The database software ships with a default response file, which we can copy and modify as per our requirements.

[oracle@vmachine-19crac1 ~]$ cd /u01/software/12.1/database/
[oracle@vmachine-19crac1 database]$ ls -ltr
total 24
-rwxr-xr-x.  1 oracle oinstall  500 Feb  6  2013 welcome.html
-rwxr-xr-x.  1 oracle oinstall 8533 Jul  7  2014 runInstaller
drwxr-xr-x.  2 oracle oinstall   34 Jul  7  2014 rpm
drwxrwxr-x.  2 oracle oinstall   29 Jul  7  2014 sshsetup
drwxr-xr-x. 14 oracle oinstall 4096 Jul  7  2014 stage
drwxr-xr-x.  4 oracle oinstall 4096 Feb 20 09:38 install
drwxrwxr-x.  2 oracle oinstall   83 May 10 15:32 response

[oracle@vmachine-19crac1 database]$ cd response
[oracle@vmachine-19crac1 response]$ cp db_install.rsp silent_12c.rsp
[oracle@vmachine-19crac1 response]$ ls -ltr
total 124
-rwxrwxr-x. 1 oracle oinstall  6038 Jan 23  2014 netca.rsp
-rwxrwxr-x. 1 oracle oinstall 74822 Apr  4  2014 dbca.rsp
-rw-rw-r--. 1 oracle oinstall 25036 Jul  7  2014 db_install.rsp
-rw-r--r--  1 oracle oinstall  9534 May 10 15:32 silent_12c.rsp

=========================================================

4) Modify the response file as below.


[oracle@vmachine-19crac1 ~]$ cat /u01/software/12.1/database/response/silent_12c.rsp

#########################################################
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.1.0
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=vmachine-19crac1
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/slient_12c
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=oinstall
oracle.install.db.OPER_GROUP=oinstall
oracle.install.db.BACKUPDBA_GROUP=oinstall
oracle.install.db.DGDBA_GROUP=oinstall
oracle.install.db.KMDBA_GROUP=oinstall
oracle.install.db.rac.configurationType=ADMIN_MANAGED
oracle.install.db.CLUSTER_NODES=vmachine-19crac1,vmachine-19crac2
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true

=========================================================

5) Run the runInstaller in Silent mode as below.


[oracle@vmachine-19crac1 database]$ ./runInstaller -silent -ignorePrereq  -responseFile /u01/software/12.1/database/response/silent_12c.rsp

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 19011 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 20479 MB    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2021-05-10_03-32-40PM. Please wait ...[oracle@vmachine-19crac1 database]$ You can find the log of this install session at:

 /u01/app/oraInventory/logs/installActions2021-05-10_03-32-40PM.log

The installation of Oracle Database 12c was successful.

Please check '/u01/app/oraInventory/logs/silentInstall2021-05-10_03-32-40PM.log' for more details.

As a root user, execute the following script(s):

        1. /u01/app/oracle/product/slient_12c/root.sh

Execute /u01/app/oracle/product/slient_12c/root.sh on the following nodes:

[vmachine-19crac1, vmachine-19crac2]

Successfully Setup Software.

=========================================================


6) Run the root scripts as below.

NODE 1:

[root@vmachine-19crac1 ~]# /u01/app/oracle/product/slient_12c/root.sh

Check /u01/app/oracle/product/slient_12c/install/root_vmachine-19crac1_2021-05-10_19-29-33.log for the output of root script

[root@vmachine-19crac1 ~]# cat /u01/app/oracle/product/slient_12c/install/root_vmachine-19crac1_2021-05-10_19-29-33.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/slient_12c
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.

Now product-specific root actions will be performed.


=========================================================

NODE 2.


[root@vmachine-19crac1 ~]# ssh vmachine-19crac2
root@vmachine-19crac2's password:
Last login: Mon May 10 19:26:02 2021
[root@vmachine-19crac2 ~]# /u01/app/oracle/product/slient_12c/root.sh
Check /u01/app/oracle/product/slient_12c/install/root_vmachine-19crac2_2021-05-10_19-29-51.log for the output of root script

[root@vmachine-19crac2 ~]#


[root@vmachine-19crac2 ~]# cat /u01/app/oracle/product/slient_12c/install/root_vmachine-19crac2_2021-05-10_19-29-51.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/slient_12c
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@vmachine-19crac2 ~]#


7) Check if the software is installed in the Oracle home.


Node 1.

[oracle@vmachine-19crac1 response]$ cd /u01/app/oracle/product/
[oracle@vmachine-19crac1 product]$ ls -ltr
total 4
drwxr-xr-x.  3 oracle oinstall   18 Feb 19 22:26 12c
drwxr-xr-x.  3 oracle oinstall   18 Feb 19 22:26 19c
drwxr-xr-x   3 oracle oinstall   18 Mar  5 16:53 11.2.0.4
drwxr-xr-x  70 oracle oinstall 4096 May 10 15:35 slient_12c
[oracle@vmachine-19crac1 product]$ cd slient_12c
[oracle@vmachine-19crac1 slient_12c]$ ls




Node2:

[root@vmachine-19crac2 ~]# cd /u01/app/oracle/product/slient_12c/

[root@vmachine-19crac2 slient_12c]# ls
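The ls output is elided above; a simple version check also confirms the home is usable (ORACLE_HOME path from the install above):

[oracle@vmachine-19crac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/slient_12c
[oracle@vmachine-19crac1 ~]$ $ORACLE_HOME/bin/sqlplus -V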

You can now apply the latest patch and create the database.


Hope this document helps.

Thanks
Sam

=========================================================





Wednesday, March 24, 2021


Oracle Grid Rolling Upgrade from 18c to 19c.
  

 

   


We can use this document to upgrade Grid Infrastructure from 18c to 19c in a rolling fashion.

In this post, we will be upgrading Oracle 18c to 19.8.0.0 (19c with the July 2020 BP). Let's check out the steps.

Preupgrade checks: Below are the prerequisites that we need to check before we plan for the grid upgrade.

1) Check if /u01 has enough free space.

It is advised to have a minimum of 30GB to 50GB of free space in /u01.
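A quick way to check, run on each node:

[grid@node1 ~]$ df -h /u01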

2) Patches to be applied before upgrading to 19c.

Check the document below for the list of patches that need to be applied before upgrading to 19c.

Patches to apply before upgrading Oracle GI and DB to 19c or downgrading to previous release (Doc ID 2539751.1)


Check if patch 28553832 is already applied to the grid home; if it is not, please apply it.
This patch is required on the 18c grid home to upgrade it to 19c.

[oracle@node1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM
[oracle@node1 ~]$ $ORACLE_HOME/OPatch/opatch lsinv | grep 28553832
28540666, 28544237, 28545898, 28553832, 28558622, 28562244, 28567875

3) Minimum Grid version : For 18c DB : Oracle Database and Grid Infrastructure 18c, minimum version 18.3.0.0.180717

4) Set the SGA in ASM to the minimum supported value.
It is necessary to have at least 3GB of SGA to correctly upgrade (and run) the GI 19c version.

sqlplus / as sysasm
SQL> show parameter sga_max_size 
SQL> show parameter sga_target 

Set them if they are not at least 3GB:
SYS@+ASM1> alter system set sga_max_size = 3G scope=spfile sid='*';
SYS@+ASM1> alter system set sga_target = 3G scope=spfile sid='*';

5) Verify no active rebalance is running

SYS@+ASM1> select count(*) from gv$asm_operation;
COUNT(*)
----------
       0

6) Download the required software and stage it on the first node of the cluster.

a) Download from Oracle Software Delivery Cloud (https://edelivery.oracle.com) 

GRID software: V982068-01.zip ("Oracle Database Grid Infrastructure 19.3.0.0.0" for Linux x86-64)

b) Latest 19c Grid Infrastructure Release Update (RU), July 2020 RU - Patch 31305339: GI RELEASE UPDATE 19.8.0.0.0

c) Latest OPatch release, Patch 6880880, for 19.x and all other database versions used.

- p6880880_190000_Linux-x86-64.zip
- https://updates.oracle.com/download/6880880.html

d) Copy the software to /export/Sam/19c-Grid/

7) Create the required directories

As root user on Node 1
mkdir -p /u01/app/19.0.0.0/grid

If it's an Exadata machine, you can use the dcli command to create the directories on all the nodes in one shot.

dcli -l root -g /root/dbs_group   mkdir -p /u01/app/19.0.0.0/grid
dcli -l root -g /root/dbs_group  chown oracle:oinstall /u01/app/19.0.0.0/grid

8) Extract the Grid software.

The 19c Grid Software is extracted directly to the Grid Home. The grid runInstaller option is no longer supported. Run the following command on the database server where the software is staged.

As Grid user on node 1

unzip -q /export/Sam/19c-Grid/V982068-01.zip -d /u01/app/19.0.0.0/grid

9) Run the Cluster verification utility.

As Grid user on node 1.

cd /u01/app/19.0.0.0/grid/
unset ORACLE_HOME ORACLE_BASE ORACLE_SID

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/18.0.0.0/grid -dest_crshome /u01/app/19.0.0.0/grid -dest_version 19.8.0.0.0 -fixup -verbose

When issues are discovered after running Cluster Verification Utility (CVU) , a runfixup.sh script is generated in /tmp/CVU_19.0.0.0.0_grid directory. Please be aware this script makes changes to your environment. You will be given the opportunity to run this later in the section "Actions to take before executing gridSetup.sh on each database server".

10) ACFS filesystem 

If you have mountpoints on ACFS, it is recommended to stop them cleanly before executing gridSetup: during the upgrade phase the script will try to shut down the entire cluster stack on the node, and if the unmount of ACFS goes wrong you will receive an error (which can be tedious and stressful to handle).

[oracle@Node1 ~]$ $ORACLE_HOME/bin/crsctl stat res -t |grep acfs
ora.datac1.acfsvol01.acfs

ONLINE  ONLINE       Node1            mounted on /acfs01,S
ONLINE  ONLINE       Node2            mounted on /acfs01,S
ONLINE  ONLINE       Node3            mounted on /acfs01,S
ONLINE  ONLINE       Node4            mounted on /acfs01,S
ora.datac1.ghchkpt.acfs
[oracle@Node1 ~]$ $ORACLE_HOME/bin/crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
AUX_VOLUMES=CANONICAL_VOLUME_DEVICE=/dev/asm/acfsvol01-311
VOLUME_DEVICE=/dev/asm/acfsvol01-311

Check and stop the ACFS file systems:

/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node1
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node2
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node3
/u01/app/18.0.0/grid/bin/srvctl stop filesystem -d /dev/asm/acfsvol01-311 -n Node4
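To confirm nothing is left mounted, srvctl can report the state of the registered file systems (run without arguments it lists every ACFS file system):

/u01/app/18.0.0/grid/bin/srvctl status filesystem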

11) Put the cluster in blackout in OEM and comment out any cron jobs which might send alerts.

12) Prepare to apply the July 2020 RU to the 19c grid home on node 1.

This is worth noting: the new GI home can (and should) be patched even before it is installed. This is a useful feature from Oracle, as it saves time: when you run gridSetup.sh with -applyRU, it patches the staged home first and then invokes the GUI, which copies the already-patched home to all the other nodes. On an 8-node cluster this saves a lot of per-node patching effort and time.

a) Unzip the July, 2020 RU

cd /export/Sam/19c-Grid/
unzip p31305339_190000_Linux-x86-64.zip 
[grid@node1 +ASM1]$ ls -l
total 232
drwxr-x---  7 grid oinstall   4096 Oct  9 17:11 31305339
drwxr-x--- 14 grid oinstall   4096 Jan 24 15:23 OPatch-ORG
-rw-rw-r--  1 grid oinstall 225499 Oct 15 13:24 PatchSearch.xml

b)  Copy the latest Opatch to new GRID home.

As Root on Node1,
cd /u01/app/19.0.0.0/grid
mv OPatch OPatch.orig.date
cp /export/Sam/19c-Grid/p6880880_190000_Linux-x86-64.zip .
unzip p6880880_190000_Linux-x86-64.zip
chown -Rf grid:oinstall OPatch
chmod 755 OPatch
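Verify that the new home now picks up the updated OPatch:

[grid@node1 +ASM1]$ /u01/app/19.0.0.0/grid/OPatch/opatch version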

13) Grid Upgrade :
Run gridSetup.sh with  -applyRU option to apply the patch and start the grid as below.

The installation log is located at /u01/app/oraInventory/logs. For OUI installations or execution of critical scripts it is recommended to use VNC or SCREEN to avoid problems in case the connection with the server is lost.

Perform these instructions as the Grid Infrastructure software owner (which is grid in this document) to install the 19c Grid Infrastructure software and upgrade Oracle Clusterware and ASM from 12.1.0.2, 12.2.0.1, 18.1.0.0 to 19c. The upgrade begins with Oracle Clusterware and ASM running and is performed in a rolling fashion. The upgrade process manages stopping and starting Oracle Clusterware and ASM and making the new 19c Grid Infrastructure Home the active Grid Infrastructure Home.

For systems with a standby database in place this step can be performed either before, at the same time or after extracting the Grid image file on the primary system.

- applyRU: applies the Release Update passed as a parameter BEFORE the 19c GI installation starts.

- mgmtDB parameters: with 19c the MGMTDB is not needed anymore; these parameters disable its installation, and the options are not even shown during the graphical installation.


[grid@Node1 +ASM1]$
[grid@Node1 +ASM1]$ unset ORACLE_HOME
[grid@Node1 +ASM1]$ unset ORACLE_BASE
[grid@Node1 +ASM1]$ unset ORACLE_SID
[grid@Node1 ]$
[grid@Node1 ]$ cd /u01/app/19.0.0.0/grid/
[grid@Node1 ]$
[grid@Node1 ]$ export DISPLAY=xx.xx.xxx.xx:1.0
[grid@Node1 ]$ /u01/app/19.0.0.0/grid/gridSetup.sh -applyRU /export/Sam/19c-Grid/31305339 -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false

This command will first apply the patch and will then invoke the GUI for the grid upgrade.

 




On the installer screens (screenshots omitted):

- Select Nodes: do not select "Skip upgrade on unreachable nodes", then click "Next".
- Skip the EM registration check (do this after installation), then click "Next".
- Un-check root script execution (we will run rootupgrade.sh manually), then click "Next".


rootupgrade.sh needs to be executed node by node.

 

NOTE: After rootupgrade.sh completes successfully on the local node, you can run the script in parallel on other nodes except for the last node. When the script has completed successfully on all the nodes except the last node, run the script on the last node. 

Do not run rootupgrade.sh on the last node until the script has run successfully on all other nodes.

NODE 1: Stop all the databases and ACFS mounts on all the nodes.

[root@Node1]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.0.0.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
..
…..
………
2020/10/03 14:00:57 CLSRSC-474: Initiating upgrade of resource types
2020/10/03 14:02:09 CLSRSC-475: Upgrade of resource types successfully initiated.
2020/10/03 14:02:24 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2020/10/03 14:02:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

------------------------------------------------------------------------------------

NODE 2: Stop all the databases and ACFS mounts on all the nodes.

[root@Node2 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
 ORACLE_OWNER= grid
 ORACLE_HOME=  /u01/app/19.0.0.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
..
…..
…….
2020/10/03 14:45:04 CLSRSC-476: Finishing upgrade of resource types
2020/10/03 14:45:18 CLSRSC-477: Successfully completed upgrade of resource types
2020/10/03 14:45:45 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.

Successfully updated XAG resources.
2020/10/03 15:11:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

----------------------------------------------------------------------------------

NODE 3: Stop all the databases and ACFS mounts on all the nodes.

Once Node 2 completes, run rootupgrade on node 3

-------------------------------------------------------------------------------------

NODE 4: Stop all the databases and ACFS mounts on all the nodes.

Once node 3 also completes, run the rootupgrade on Node 4
-------------------------------------------------------------------------------------

Once rootupgrade.sh has been run in all the nodes, Continue with Graphical Installation and press OK.





14) Once the upgrade completes, check if the ASM is up and running in all the nodes and also Perform an extra check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes:


[root@Node1 ~]# /u01/app/19.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
node-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@Node2 ~]# /u01/app/19.0.0.0/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

15) Check the applied patches with opatch lspatches.

[oracle@Node101 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31228670;REBALANCE DISK RESYNC CAUSING LOST WRITE ORA-00600 [KDSGRP1] OR ORA-01172 ON CRASH RECOVERY
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)

16) COMPATIBILITY attribute (set after 1 week)

WAIT FOR 1 WEEK FOR THIS STEP
---------------------------------------

Please wait for one week, then check and set the compatibility attribute on all the diskgroups to allow use of the new features. (Raising compatible.asm cannot be rolled back and would prevent a downgrade of the grid home, which is why the wait is advised.)
. oraenv
+ASM1
SELECT name AS diskgroup, substr(compatibility,1,12) AS asm_compat,
substr(database_compatibility,1,12) AS db_compat FROM V$ASM_DISKGROUP;

DISKGROUP                      ASM_COMPAT   DB_COMPAT
------------------------------ ------------ ------------
DATAC1                         18.0.0.0.0   11.2.0.4.0
RECOC1                         18.0.0.0.0   11.2.0.4.0

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';
ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'compatible.asm' = '19.3.0.0.0';

SELECT name AS diskgroup, substr(compatibility,1,12) AS asm_compat,
substr(database_compatibility,1,12) AS db_compat FROM V$ASM_DISKGROUP;

DISKGROUP                      ASM_COMPAT   DB_COMPAT
------------------------------ ------------ ------------
DATAC1                         19.3.0.0.0   11.2.0.4.0
RECOC1                         19.3.0.0.0   11.2.0.4.0

17) Check and set Flex ASM cardinality to "ALL".

Note: Starting with release 12.2, ASM is configured as "Flex ASM". By default, Flex ASM cardinality is set to 3.

This means configurations with four or more database nodes in the cluster might only see ASM instances on three nodes. Nodes without a local ASM instance will use an ASM instance on a remote node within the cluster. Only when the cardinality is set to "ALL" will ASM bring up the additional instances required to fulfill the cardinality setting.

On node 1:

. oraenv
+ASM1
srvctl config asm

The output should contain "ASM instance count: ALL". If it does not, set it:

srvctl modify asm -count ALL

 

18)  Perform Inventory update

An inventory update is required to the 19c Grid Home because in 19c the cluster node names are not registered in the inventory. Older database version tools relied on node names from inventory. 

Please run the following command on the local node when using earlier releases of database with 19c GI.

Run this step on every node.

Node1

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node1

Node2

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node2

Node3

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node3

Node4

(grid)$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={node1,node2,node3,node4}" CRS=true LOCAL_NODE=node4

19)  Disable Diagsnap for Exadata

Due to Bugs 24900613, 25785073 and 25810099, Diagsnap should be disabled for Exadata. 

(grid)$ cd /u01/app/19.0.0.0/grid/bin

(grid)$ ./oclumon manage -disable diagsnap

 

20) Edit the oratab to include the 19c GRID HOME.

21) Remove the blackouts

22) Start the listener and all the databases


**********************************************************
                -------- END OF UPGRADE --------
**********************************************************

Hope this document helps you. 


In the next post, I am going to show how to downgrade the above upgraded grid back to 18c. 



Thanks

Sam