Thursday 6 February 2014

Data Guard Far Sync Standby

Oracle Database 12c has a new type of standby database - the far sync standby.
This type of database does not have any datafiles; all it does is receive redo from the primary and retransmit it. It does not perform any media recovery, but it can compress the redo during re-transmission. It is transparent to the DBA during switchover/failover operations.

This type of database is useful when the primary and standby are too far apart to allow redo transport in synchronous mode. The recommended setup is synchronous redo transport between the primary and its far sync standby, and asynchronous transport from there to the actual failover/switchover target.

This arrangement allows a zero data loss configuration across large distances, because the far sync standby is close enough to the primary for efficient synchronous redo transport.

A far sync standby also relieves the primary of the overhead of transmitting redo to multiple sites.

The far sync instance is always located close enough to the primary redo source to allow synchronous redo transport.

For the scenario where the primary and terminal standby have exchanged roles, it is necessary to have a far sync standby close to the new primary, to receive its redo synchronously and transmit it to the new standby.


Below is a discussion on the creation of a far sync standby database.


Scenario:
A CDB called CDB2 is in a Data Guard Max Performance configuration, with a primary and a physical (terminal) standby.

A far sync instance will be inserted into the configuration, and the configuration will then be upgraded to Max Availability.



Query issued at Primary showing initial configuration
-----------------------------------------------------------------


SQL> select * from v$dataguard_config;

    DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE         CURRENT_SCN CON_ID
    ---------------- ------------- ----------------- ----------- ------
    SCOTT            NONE          PRIMARY DATABASE       888526      0
    TIGER            SCOTT         PHYSICAL STANDBY       888529      0

SQL> select protection_level from v$database;

    PROTECTION_LEVEL
    --------------------
    MAXIMUM PERFORMANCE


A far sync standby (FSS) will be added to the configuration, in synchronous redo transport with the primary SCOTT. It will forward the received redo asynchronously to the terminal standby identified by TIGER.

 There will eventually be 3 sites in the configuration:
    1. A Primary database - identified by the unique name SCOTT
    2. A Far Sync standby - identified by the unique name FSS
    3. A terminal standby  - identified by the unique name TIGER
   

Steps to add a Data Guard Far Sync standby instance

1. On all 3 sites, create a tnsnames entry for the new far sync standby instance, called CDB2_FSS.

tnsnames entries should already exist for the original primary (CDB2_SCOTT) and the original standby (CDB2_TIGER) from the previous Data Guard setup.
   
    CDB2_FSS =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.107)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = FSS)
        )
      )
 

2. On the far sync host, create a static listener registration entry for the far sync standby instance

    In the netmgr GUI, go to the listener database services configuration, and click Add database service.
    Supply these values in the textboxes:
   
        Global Database Name : FSS
        Oracle Home          : /u01/app/oracle/product/12.1.0/dbhome_1
        SID                  : CDB2

    The resulting listener.ora static registration entry is sketched below.
On the far sync host again, re-load the listener and check the static registration of the service FSS
-----------------------------------------------------------------------------------------------------
[oracle@laboms oradata]$ lsnrctl reload

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 05-FEB-2014 18:27:23

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=laboms.example.com)(PORT=1521)))
The command completed successfully

   
[oracle@laboms oradata]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 05-FEB-2014 18:48:20

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=laboms.example.com)(PORT=1521)))
Services Summary...
Service "FSS" has 1 instance(s).
  Instance "CDB2", status UNKNOWN, has 1 handler(s) for this service...

 Handler(s):
"DEDICATED" established:0 refused:0
LOCAL SERVER
...
...

 

3. Copy the primary site's Oracle password file to the far sync site

This can be done using scp or ftp.
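A minimal sketch using scp, assuming the default $ORACLE_HOME/dbs password file location, the SID CDB2 on both hosts, and the illustrative host name fss for the far sync site:

    $ scp $ORACLE_HOME/dbs/orapwCDB2 oracle@fss:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwCDB2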

4. Check that you are able to connect to the far sync instance using the tnsnames entry

    [oracle@laboms ~]$ sqlplus sys/oracle@cdb2_fss as sysdba
   
    SQL*Plus: Release 12.1.0.1.0 Production on Tue Feb 4 14:35:04 2014
   
    Copyright (c) 1982, 2013, Oracle.  All rights reserved.
   
    Connected to an idle instance.

5. Modify primary Data Guard parameters for sending archive logs to the far sync instance

log_archive_config: 
Purpose: This is the list of db_unique_names of the sites participating in the configuration.
Set this to 'DG_CONFIG=(scott,fss,tiger)'

log_archive_dest_1:
Purpose: Use the fast recovery area for local archiving. The db_unique_name attribute here is SCOTT, the local (primary) database's unique name.
Set this to 'location=use_db_recovery_file_dest db_unique_name=SCOTT'

log_archive_dest_2: 
Purpose: Send redo to the far sync instance synchronously.
Set this to 'service=CDB2_FSS SYNC valid_for=(online_logfiles,primary_role) db_unique_name=FSS'

These can be applied with ALTER SYSTEM, as sketched below.
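A minimal sketch of applying these settings on the primary (assumes an spfile is in use, hence scope=both):

    SQL> alter system set log_archive_config='DG_CONFIG=(scott,fss,tiger)' scope=both;
    SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest db_unique_name=SCOTT' scope=both;
    SQL> alter system set log_archive_dest_2='service=CDB2_FSS SYNC valid_for=(online_logfiles,primary_role) db_unique_name=FSS' scope=both;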

6. Create a far sync instance controlfile at the primary

SQL> alter database create far sync instance controlfile as 'fss_control1.ctl';

   Database altered.


Next, copy the controlfile to the far sync site, to the location that will be set in the control_files parameter (a sketch follows).
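A minimal sketch (a controlfile created with a relative name normally lands in $ORACLE_HOME/dbs; the host name fss is illustrative, and the target path matches the control_files value set in step 7):

    $ scp $ORACLE_HOME/dbs/fss_control1.ctl oracle@fss:/u01/app/oracle/oradata/cdb2_fss/fss_control1.ctl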

7. Create a pfile from the primary spfile, copy it to the far sync site, and modify the parameters listed below.

    SQL> create pfile='/home/oracle/fss_pfile.ora' from spfile;
   
    File created.

Next, copy the pfile to the far sync standby site and edit these parameters there (a consolidated sketch of the resulting pfile follows the list):
db_unique_name:
Purpose: Uniquely identifies the far sync site.
Set this to 'fss'

log_archive_config:
Purpose: List of sites participating in redo transport.
Set this to 'DG_CONFIG=(SCOTT,FSS,TIGER)'

log_archive_dest_1:
Purpose: Set the local archiving destination. The far sync instance archives the received redo locally first.
Set this to 'location=use_db_recovery_file_dest db_unique_name=FSS'

log_archive_dest_2:
Purpose: Set the remote archiving destination. The far sync site ships redo asynchronously to the terminal standby.
Set this to 'service=CDB2_TIGER ASYNC valid_for=(standby_logfiles,standby_role) db_unique_name=TIGER'

control_files:
Purpose: Set the location of the far sync instance controlfile.
Set this to '/u01/app/oracle/oradata/cdb2_fss/fss_control1.ctl'

log_file_name_convert:
Purpose: Set the primary --> far sync file path translation for the standby redo log files.
Set this to '/u01/app/oracle/oradata/cdb2','/u01/app/oracle/oradata/cdb2_fss'

fal_server:
Purpose: Set the site used for archive log gap resolution.
Set this to 'SCOTT'
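A minimal sketch of the edited section of fss_pfile.ora (all other parameters keep the values copied from the primary):

    db_unique_name='fss'
    log_archive_config='DG_CONFIG=(SCOTT,FSS,TIGER)'
    log_archive_dest_1='location=use_db_recovery_file_dest db_unique_name=FSS'
    log_archive_dest_2='service=CDB2_TIGER ASYNC valid_for=(standby_logfiles,standby_role) db_unique_name=TIGER'
    control_files='/u01/app/oracle/oradata/cdb2_fss/fss_control1.ctl'
    log_file_name_convert='/u01/app/oracle/oradata/cdb2','/u01/app/oracle/oradata/cdb2_fss'
    fal_server='SCOTT'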

8. Start up the far sync standby instance using the pfile

        We are using the net service name cdb2_fss created previously to connect to this instance and start it up.
       
        [oracle@fss cdb2_fss]$ sqlplus sys/oracle@cdb2_fss as sysdba
        SQL*Plus: Release 12.1.0.1.0 Production on Fri Feb 7 12:32:56 2014
        Copyright (c) 1982, 2013, Oracle.  All rights reserved.
        Connected to an idle instance.
        SQL> startup nomount pfile='fss_pfile.ora';
        ORACLE instance started.
        Total System Global Area 1570009088 bytes
        Fixed Size                  2288776 bytes
        Variable Size            1241514872 bytes
        Database Buffers          318767104 bytes
        Redo Buffers                7438336 bytes
        SQL>           

9. Mount the database and check the standby logs

    SQL> alter database mount;

    Database altered.



    SQL> select a.group#, substr(b.member,1,45) member, b.type, a.sequence#
         from v$standby_log a join v$logfile b on (a.group#=b.group#);

        GROUP# MEMBER                                        TYPE     SEQUENCE#
    ---------- --------------------------------------------- ------- ----------
             3 /u01/app/oracle/oradata/cdb2_fss/root/sby1.or STANDBY          0
             4 /u01/app/oracle/oradata/cdb2_fss/root/sby4.or STANDBY          0
             5 /u01/app/oracle/oradata/cdb2_fss/root/sby5.or STANDBY          0


   So far, no redo has been received at the FSS site from the primary.
   

Remember to create an spfile for the far sync instance using CREATE SPFILE FROM MEMORY, as sketched below.
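For example, while connected to the far sync instance (the spfile is created in the default $ORACLE_HOME/dbs location):

    SQL> create spfile from memory;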

10. Check the archive destination status at the primary to see if it is able to send archived logs to the far sync site.

    Initially it showed an error:

    SQL> select dest_id, status, error, archived_seq# from v$archive_dest_status where dest_id in (1,2);

       DEST_ID STATUS    ERROR                                             ARCHIVED_SEQ#
    ---------- --------- ------------------------------------------------- -------------
             1 VALID                                                                  69
             2 ERROR     ORA-01094: ALTER DATABASE or ALTER PLUGGABLE                  0
                         DATABASE CLOSE in progress. Connections not permitted

But the error soon cleared by itself.
    
SQL> select dest_id, status,error, archived_seq# from    
      2  v$archive_dest_status where dest_id in (1,2);
       DEST_ID STATUS    ERROR  ARCHIVED_SEQ#
    ---------- --------- -----  -------------
             1 VALID                       75
             2 VALID                       75

11. Force a log file switch on the primary to test archive log transport

    SQL> alter system switch logfile;

    System altered.

    SQL> archive log list;
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     76
    Next log sequence to archive   77
    Current log sequence           77

12. Query the archive destination status on the primary, to see if the logs were archived successfully to the far sync instance

    SQL>  select dest_id, status, error, archived_seq# from v$archive_dest_status where dest_id in (1,2);
       DEST_ID STATUS     ERROR    ARCHIVED_SEQ#
    ---------- ---------- -------- -------------
             1 VALID                          76
             2 VALID                          76
             
    Examine the current protection level of the configuration
    ---------------------------------------------------------
   
    SQL> select protection_level from v$database;
    PROTECTION_LEVEL
    --------------------
    MAXIMUM PERFORMANCE

13. Upgrade the protection mode of the data guard configuration to max availability

    SQL> alter database set standby to maximize availability;
    Database altered.
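To verify, the protection level can be re-checked at the primary; once synchronous redo transport to the far sync instance is established it should report MAXIMUM AVAILABILITY:

    SQL> select protection_mode, protection_level from v$database;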

14. Check the v$dataguard_config view to examine the data guard configuration

Below, the Data Guard configuration status is queried after a few more log file switches were performed on the primary for testing.


    SQL> select * from v$dataguard_config;

    DB_UNIQUE_NAME           PARENT_DBUN   DEST_ROLE         CURRENT_SCN CON_ID
    ------------------------ ------------- ----------------- ----------- ------
    SCOTT                    NONE          PRIMARY DATABASE       944193      0
    FSS                      SCOTT         FAR SYNC INSTANCE      944104      0
    TIGER                    FSS           PHYSICAL STANDBY       944104      0

 


Saturday 1 February 2014

RMAN 12c feature - network based restore

In 12c Data Guard environments, RMAN can restore and recover using a new network-enabled restore method: RMAN can now retrieve the datafiles, archived logs and backupsets that it needs for a restore or recovery operation directly from its remote Data Guard counterpart.

The files needed for a restore/recovery operation are transferred to the requesting site as backupsets, optionally compressed, to reduce network overhead. This technique is called network-based restore.

New syntax is introduced to support this. There is a new RESTORE ... FROM SERVICE <servicename> command, and it is also used in the memory scripts generated, for instance, during database duplication. The service name can point to a tns entry for the remote Data Guard counterpart site.

RESTORE <RESTOREOBJ> FROM SERVICE <dbSERVICE>
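For example (a sketch; the datafile number and service name are illustrative), a datafile lost at the standby can be restored over the network directly from the primary:

    RMAN> restore datafile 4 from service cdb1_scott;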

Therefore, there are now two techniques available for database duplication, depending on the mode of connection, the number of auxiliary channels allocated and the DUPLICATE command-line options specified.
  • The existing image copy method (push based), where the target channels do most of the duplication work.
  • The new backupset method (pull based), where the auxiliary channels do the bulk of the work.
Possible advantages of the backupset method:
  • The auxiliary channels do the bulk of the duplication effort, so the target database - if it is a production database - does not incur the overhead of the operation.
  • Backupsets are compact, smaller than the datafiles, and more efficient to transport over the network. The user can also request compressed backupsets, as sketched below.
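A minimal sketch of requesting compressed backupsets for the pull-based method in a DUPLICATE command (other clauses as in the full command shown later in this post):

  duplicate target database for standby
    from active database
    using compressed backupset
    nofilenamecheck;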

The network-based restore operation can be seen in the RMAN memory scripts generated during the duplication of a 12c primary database as a standby.

The steps for creating a physical standby for a container database are the same as for non-CDBs.
Data Guard specific init parameters need to be set on both instances, tnsnames networking needs to be configured, and standby redo logs and archive log destinations need to be configured. (Adding standby redo logs on the primary is sketched below.)
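A minimal sketch of adding a standby redo log group on the primary (the group number, file path and 50M size are illustrative; standby logs should be at least as large as the online redo logs, with one extra group per thread):

    SQL> alter database add standby logfile thread 1
           group 10 ('/u01/app/oracle/oradata/cdb1/sby10.log') size 50m;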

The auxiliary instance needs to be set up using a temporary init.ora and a copy of the password file, and should be running in nomount mode, registered statically with its listener, as sketched below.
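A minimal sketch of bringing up the auxiliary instance (the single db_name parameter is usually enough for active duplication; the SID and file name are illustrative):

    $ cat $ORACLE_HOME/dbs/initCDB1.ora
    db_name=CDB1

    $ export ORACLE_SID=CDB1
    $ sqlplus / as sysdba
    SQL> startup nomount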

The section below discusses the RMAN actions and memory scripts generated during the duplication of a CDB called CDB1 as a physical standby.

The method used here is active database duplication.


Connect to the primary as target and to the standby as auxiliary from RMAN, and issue the DUPLICATE DATABASE command shown below.

$ rman target sys/oracle@cdb1_scott auxiliary sys/oracle@cdb1_tiger

duplicate target database for standby 
 from active database
 nofilenamecheck dorecover
  spfile
     parameter_value_convert 'scott','tiger'
      set db_unique_name 'TIGER'
      set log_archive_dest_2 'service=CDB1 async valid_for=(online_logfiles,primary_role) db_unique_name=CDB1';


The main duplication actions that follow are discussed below.
    

     1) Copy the password file and spfile from the primary (the password file copy is done implicitly when duplicating to create a physical standby)

contents of Memory Script: (edited for clarity)
     {
        backup as copy reuse
        targetfile  '/u01/.../dbhome_1/dbs/orapwCDB1' 

            auxiliary format '/u01/app/.../dbhome_1/dbs/orapwCDB1'   ;
        restore clone from service  'cdb1_scott' spfile to
            '/u01/app/.../dbhome_1/dbs/spfileCDB1.ora';
        sql clone "alter system set spfile= ''/u01/app/...dbhome_1/dbs/spfileCDB1.ora''";
    }

     2) Set up the requested site-specific init parameters on the standby, as specified in the DUPLICATE ... SPFILE SET options.

    
          contents of Memory Script:
          {
             sql clone "alter system set  db_unique_name =  ''TIGER'' comment=
           '''' scope=spfile";
             sql clone "alter system set  log_archive_dest_2 = ''service=CDB1 async valid_for=(online_logfiles,primary_role) db_unique_name=CDB1'' comment=
           '''' scope=spfile";
             shutdown clone immediate;
             startup clone nomount;
          }    

     3) Restore the primary controlfile to the standby as a standby controlfile. Note that this is a network-based restore from an appropriate controlfile backupset.

          contents of Memory Script:
          {
             restore clone from service  'cdb1_scott' standby controlfile;
          }    
          channel ORA_AUX_DISK_1: starting datafile backup set restore
          channel ORA_AUX_DISK_1: using network backup set from service cdb1_scott
          channel ORA_AUX_DISK_1: restoring control file
          channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
          output file name=/u01/app/oracle/oradata/cdb1/control1.ctl
          output file name=/u01/app/oracle/oradata/cdb1/control2.ctl
          Finished restore at 02-FEB-14


     4) Mount the standby database using the restored controlfile.

          contents of Memory Script:
          {
             sql clone 'alter database mount standby database';
          }


     5) Restore all datafiles. RMAN uses the RESTORE ... FROM SERVICE syntax.

  
  executing Memory Script

  contents of Memory Script:
  {
     set newname for tempfile  1 to  "/u01/app/oracle/oradata/cdb1/temp.ora";
       ...
     set newname for tempfile  7 to  "/u01/app/oracle/oradata/cdb1/exnoncdb/temp.ora";
     switch clone tempfile all;

     set newname for datafile  1 to  "/u01/app/oracle/oradata/cdb1/system.ora";
     ...
     set newname for datafile  55 to  "/u01/app/oracle/oradata/cdb1/salespdb/ts_recovery.ora";

     restore  from service  'cdb1_scott'  clone database
     ;
     sql 'alter system archive log current';
  }   
  channel ORA_AUX_DISK_1: starting datafile backup set restore
  channel ORA_AUX_DISK_1: using network backup set from service cdb1_scott
  channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
  channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/cdb1/system.ora
  ...
  ...
  ...
  channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
  channel ORA_AUX_DISK_1: restoring datafile 00055 to /u01/app/oracle/oradata/cdb1/salespdb/ts_recovery.ora
  channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
  Finished restore at 02-FEB-14


  6) Recover the datafiles to the recovery point specified, if any.

    Note below that the archived log files with sequences 59 and 60 are pulled from the primary,
    and then applied to the standby database's datafiles.
       
  contents of Memory Script:
  {
     restore clone force from service  'cdb1_scott'
          archivelog from scn  1871401;

     switch clone datafile all;
  }    
  channel ORA_AUX_DISK_1: starting archived log restore to default destination
  channel ORA_AUX_DISK_1: using network backup set from service cdb1_scott


  channel ORA_AUX_DISK_1: restoring archived log
  archived log thread=1 sequence=59
  channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
  channel ORA_AUX_DISK_1: starting archived log restore to default destination
  channel ORA_AUX_DISK_1: using network backup set from service cdb1_scott
  channel ORA_AUX_DISK_1: restoring archived log
  archived log thread=1 sequence=60
  channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
  Finished restore at 02-FEB-14

  contents of Memory Script:
 {
    set until scn  1873330;
    recover
    standby
    clone database
    delete archivelog
   ;
 }
 executing Memory Script

 executing command: SET until clause

 Starting recover at 02-FEB-14
 using channel ORA_AUX_DISK_1

 starting media recovery

 archived log for thread 1 with sequence 59 is already on disk as file /u01/.../arch1_59_837593794.dbf
 archived log for thread 1 with sequence 60 is already on disk as file /u01/.../arch1_60_837593794.dbf
 archived log file name=/u01/.../arch1_59_837593794.dbf thread=1 sequence=59
 archived log file name=/u01/.../arch1_60_837593794.dbf thread=1 sequence=60
 media recovery complete, elapsed time: 00:00:03
 Finished recover at 02-FEB-14
 Finished Duplicate Db at 02-FEB-14