Sunday 27 April 2014

Partial indexes in Oracle Database 12c.

Oracle Database 12c introduces a potentially useful feature called partial indexes. It saves disk space by not maintaining index segments for specific partitions. Partial indexes can be useful, for example, during data load operations: index maintenance overhead is avoided for the partitions being loaded.

In 12c, partial indexing is exposed through new syntax in the CREATE TABLE DDL, which now supports an INDEXING attribute for each partition that can be either ON (the default) or OFF. Likewise, the CREATE INDEX statement supports new syntax for creating partial indexes.


When a CREATE INDEX statement executes with the INDEXING PARTIAL syntax, the partitions that have INDEXING OFF are not indexed. To explore the partial index feature we use a test container database named CDB1.

Container db: CDB1
Pluggable db: SALESPDB
Owner/table: SALES.sales
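
Unless noted otherwise, the examples below are run from a session connected to the PDB as the sales user, along the lines of:

$ sqlplus sales/sales@localhost:1521/salespdb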

1. Creating a partitioned table with selective indexing.

In this table, indexing is turned off for partition p1 and enabled for the others.

create table sales ( time_id date, quantity number(5))
partition by range (time_id)
(
 partition p1 values less than (to_date('01012014','ddmmyyyy')) indexing off,
 partition p2 values less than (to_date('01062014','ddmmyyyy')) indexing on,
 partition p3 values less than (to_date('31122014','ddmmyyyy')) indexing on,
 partition p4 values less than (maxvalue)
);
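
The per-partition INDEXING setting can be verified from the data dictionary (the same check is used again later in this post):

select partition_name, indexing from user_tab_partitions
 where table_name = 'SALES';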


2. Create a local partitioned index on the time_id column (note the INDEXING PARTIAL syntax)


SQL> create index test_idx on sales(time_id) indexing partial local;
Index created.
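
Whether the index as a whole is partial is recorded in the INDEXING column of user_indexes (new in 12c); it should show PARTIAL here:

select index_name, indexing from user_indexes
 where index_name = 'TEST_IDX';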


3. Populate the table with data


insert into sales select to_date('01012010','ddmmyyyy')+ rownum, 100 from dual connect by level <=5000
SQL> /

5000 rows created.

SQL> commit;

Commit complete.


4. Examine the user_ind_partitions view to see which index partitions have a segment.


SQL> select partition_name,segment_created, status from user_ind_partitions
  2  where index_name='TEST_IDX';

PARTITION_NAME       SEG STATUS
-------------------- --- --------
P1                   NO  UNUSABLE
P2                   YES USABLE
P3                   YES USABLE
P4                   YES USABLE

As seen above, no segment exists for the index partition corresponding to table partition p1, because indexing was turned off for that partition. Furthermore, that index partition is marked UNUSABLE.
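
Note that the optimizer can still use a partial index for queries spanning both indexed and non-indexed partitions, through a transformation called table expansion; the resulting plan can be inspected in the usual way:

-- predicate spans p1 (indexing off) and p2 (indexing on)
explain plan for
 select * from sales
 where time_id between to_date('01122013','ddmmyyyy')
                   and to_date('01022014','ddmmyyyy');

select * from table(dbms_xplan.display);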


Partial indexing and global indexes

The Oracle Database Concepts documentation (Chapter 4, Partitions, Views, and Other Schema Objects; section Partial Indexes for Partitioned Tables) mentions that partial global indexes are not affected or invalidated by partition maintenance operations on partitions having INDEXING OFF.


From the documentation:
Before Oracle Database 12c, an exchange partition operation required a physical update of an associated global index to retain it as usable. Starting with Oracle Database 12c, if the partitions involved in a partition maintenance operation are not part of a partial global index, then the index remains usable without requiring any global index maintenance.

I tried to check this out but found otherwise. Below are the steps executed.

Step #1.

First, I created a table called sales range-partitioned on the time_id column with indexing turned off for all but one partition.

create table sales ( time_id date, region_id number(3), amt number(5))
partition by range ( time_id )
(
 partition p0 values less than (to_date('01012010','ddmmyyyy')) indexing on,
 partition p1 values less than (to_date('01012014','ddmmyyyy')) indexing off,
 partition p2 values less than (to_date('01042014','ddmmyyyy')) indexing off,
 partition p3 values less than (to_date('01072014','ddmmyyyy')) indexing off,
 partition p4 values less than (to_date('01012015','ddmmyyyy')) indexing off,
 partition p5 values less than (maxvalue) indexing off
);




Step #2.
I created a global partitioned index on the region_id column with 4 partitions, enabling the partial indexing option.

SQL> create index test_idx on sales(region_id)
  2   global     
  3   partition by range(region_id)
  4   (partition p1 values less than(2),
  5    partition p2 values less than(3),
  6    partition p3 values less than(4),
  7    partition p4 values less than(maxvalue)
  8  ) indexing partial;

Index created.

Step #3.
I checked the status of the table and index partitions that now exist. There are 4 index partitions, corresponding to the 4 defined in the DDL. As for the table, there is no data in any partition so far, and there are 6 partitions in total. Only table partition P0 has indexing enabled.

SQL> select partition_name,indexing from user_tab_partitions           
  2  where table_name='SALES';

PARTITION_NAME       INDE
-------------------- ----
P0                   ON
P1                   OFF
P2                   OFF
P3                   OFF
P4                   OFF
P5                   OFF

6 rows selected.

SQL> select partition_name,segment_created,status from user_ind_partitions
  2  where index_name='TEST_IDX';

PARTITION_NAME       SEG STATUS
-------------------- --- --------
P4                   YES USABLE
P3                   YES USABLE
P2                   YES USABLE
P1                   YES USABLE

Step #4.
I prepare a temporary heap-organized table, t_sales, to stage the partition exchange load; I will shortly load the data from this staging table into the sales table via an exchange. The staging dates are all > 01 Jan 2010 (and < 01 Jan 2014), so when exchanged into the sales table the data lands in table partition p1, which has indexing turned OFF.
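
(The creation of the staging table is not shown; a minimal sketch matching the shape of the sales table would be:)

-- hypothetical staging table DDL, mirroring the sales table's columns
create table t_sales ( time_id date, region_id number(3), amt number(5));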

SQL> insert into t_sales
  2   select to_date('01012010','ddmmyyyy') + rownum, 3, 100 from dual
  3   connect by level <=10;

10 rows created.

SQL> commit;

Commit complete.
SQL> select * from t_sales;

TIME_ID    REGION_ID        AMT
--------- ---------- ----------
02-JAN-10          3        100
03-JAN-10          3        100
04-JAN-10          3        100
05-JAN-10          3        100
06-JAN-10          3        100
07-JAN-10          3        100
08-JAN-10          3        100
09-JAN-10          3        100
10-JAN-10          3        100
11-JAN-10          3        100

10 rows selected.

Step #5.
In this step, I perform the partition exchange load. I do not specify an UPDATE GLOBAL INDEXES clause because I want to see the effect of a bare partition exchange on the global index.

SQL> alter table sales exchange partition p1 with table t_sales;

Table altered.

Examining the table data for the target partition

SQL> select * from sales partition(p1);

TIME_ID    REGION_ID        AMT
--------- ---------- ----------
02-JAN-10          3        100
03-JAN-10          3        100
04-JAN-10          3        100
05-JAN-10          3        100
06-JAN-10          3        100
07-JAN-10          3        100
08-JAN-10          3        100
09-JAN-10          3        100
10-JAN-10          3        100
11-JAN-10          3        100

10 rows selected.

Step #6.
Examine the status of the partitions of the global index: they are ALL unusable now.

SQL> select partition_name,segment_created,status from user_ind_partitions
  2  where index_name='TEST_IDX';

PARTITION_NAME       SEG STATUS
-------------------- --- --------
P4                   YES UNUSABLE
P3                   YES UNUSABLE
P2                   YES UNUSABLE
P1                   YES UNUSABLE

I have done a partition exchange on a partition with indexing=off (partition p1).
As per the Oracle documentation, this should not have invalidated any of the partitions of the global index, which was created as a partial index. Perhaps I missed some detail; I am not sure. I would be thankful if someone could share any insights/experiences on this topic.
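
For completeness: the invalidated index partitions can be repaired by rebuilding them, or the invalidation can be avoided by maintaining the index as part of the exchange, for example:

-- repair one unusable partition of the global index
alter index test_idx rebuild partition p1;

-- or maintain the index during the exchange in the first place
alter table sales exchange partition p1 with table t_sales
 update global indexes;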


Wednesday 23 April 2014

Online administrative operations in Oracle Database 12c

Starting with Oracle Database 12c, it is possible to run certain DDL against tables even while DML operations are ongoing against them. In previous releases, such an ALTER TABLE would have required downtime because of locking.

For example, these operations can now be done online (in most cases by adding the ONLINE keyword to the DDL):

ALTER TABLE ... SET UNUSED <column>
ALTER TABLE ... MOVE PARTITION <tablespace>
ALTER DATABASE MOVE DATAFILE
ALTER TABLE ... DROP CONSTRAINT <constraint>

We will explore examples of the first three; the fourth follows the same pattern, as sketched below.
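
A minimal sketch of the fourth operation (the constraint name sales_fk is hypothetical; none is defined in the test schema below):

alter table sales.sales drop constraint sales_fk online;  -- constraint name is hypothetical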

Example 1: Move a table partition online:


Container database : CDB1
PDB with target table: SALESPDB
Partitioned table owner/table name : sales.sales

Step 1. Create partitioned table

The table is range-partitioned on date and has 6 partitions: one for dates before 2014, roughly one per quarter of 2014, and a MAXVALUE partition. The dated partitions all reside in the same tablespace, TS_SALES.

The table DDL is shown below.

CREATE TABLE "SALES"."SALES"
   (    "TIME_ID" DATE,
        "AMOUNT" NUMBER(5,0)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255

  PARTITION BY RANGE ("TIME_ID")
 (
   PARTITION "P1"  VALUES LESS THAN (TO_DATE('2014-01-01', 'YYYY-MM-DD')) TABLESPACE "TS_SALES" ,
   PARTITION "P2"  VALUES LESS THAN (TO_DATE('2014-03-31 00:00:00', 'YYYY-MM-DD')) TABLESPACE "TS_SALES" ,
   PARTITION "P3"  VALUES LESS THAN (TO_DATE('2014-06-30 00:00:00', 'YYYY-MM-DD')) TABLESPACE "TS_SALES" ,
   PARTITION "P4"  VALUES LESS THAN (TO_DATE('2014-09-30 00:00:00', 'YYYY-MM-DD')) TABLESPACE "TS_SALES" ,
   PARTITION "P5"  VALUES LESS THAN (TO_DATE('2014-12-31 00:00:00', 'YYYY-MM-DD')) TABLESPACE "TS_SALES" ,
   PARTITION "PMAX"  VALUES LESS THAN (MAXVALUE)
);


Step 2. Populate it with some trial data (optional)

insert into sales
 select to_date('01012014','ddmmyyyy') + rownum, 100 from dual connect by level <=365;
commit;

Step 3. Create a tablespace to which a partition will be moved even while it is available for DML

SQL> conn sys/oracle@localhost/salespdb as sysdba
Connected.

SQL> create tablespace ts_trial datafile '/u01/app/oracle/oradata/cdb1/salespdb/ts_trial.ora' size 5m autoextend on;

Tablespace created.

Step 4. Simulate an ongoing workload inserting into a specific partition 

The script below inserts half a million rows into partition p4 by hard-coding the time_id to 20 Sept 2014.

insert into sales select to_date('20140920','yyyymmdd'),300 
   from dual  connect by level<=500000;
commit;


Step 5. In another session move the partition which is being accessed  (partition p4)

First, try it without the ONLINE keyword -- it fails:

SQL> alter table sales.sales move partition p4 tablespace ts_trial;
alter table sales.sales move partition p4 tablespace ts_trial
                  *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Next, try it with the ONLINE keyword, repeating the insert in the other session so DML is in flight -- this time, it succeeds:


SQL> alter table sales.sales move partition p4 tablespace ts_trial online;

Table altered.

Step 6. Count the number of records in partition p4 after the simulated workload has completed (optional)

SQL> select count(*) from sales.sales partition(p4);

  COUNT(*)
----------
    500092


The ONLINE ALTER TABLE capability keeps a table available to applications even while administrative actions are being performed on it.

Some conditions prevent an online ALTER TABLE, for example:
  • Supplemental logging enabled
  • Table being altered has bitmap indexes 

Example 2: Performing an ALTER TABLE ... SET UNUSED (column) online

In this example, a third column is added to the sales table and is then set to UNUSED even as another session is accessing the column.


SQL> alter table sales.sales add region_id number(3);

Table altered.


As in the first example, a DML operation involving the column about to be set unused is started in a different session.

SQL> insert into sales 
  2  select to_date('22092014','ddmmyyyy'),333,3 from dual
  3  connect by level <=500000;
500000 rows created.


While the insert is underway, the ALTER TABLE is attempted in the original session.
It results in an error.

SQL> alter table sales.sales set unused (region_id);
alter table sales.sales set unused (region_id)
                  *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired


Restart the DML operation and retry the ALTER TABLE with the ONLINE modifier -- this time it waits for the DML in the other session to commit, then succeeds.

SQL> alter table sales set unused(region_id) online;

Table altered.


SQL> desc sales;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TIME_ID                                            DATE
 AMOUNT                                             NUMBER(5)

Table columns can be set to UNUSED and physically dropped later in a maintenance window, because dropping columns from large tables can be time-consuming and resource-intensive.
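
The deferred physical cleanup can then be run during the maintenance window; a sketch (the CHECKPOINT interval is an arbitrary example):

alter table sales.sales drop unused columns;

-- or, checkpointing periodically to limit undo usage:
alter table sales.sales drop unused columns checkpoint 10000;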

Example 3: Move a datafile online.

Starting with Oracle Database 12c, it is possible to move a datafile to a different location even while it is being accessed by applications. To explore this feature in the test database, a trial workload is first started in one session to insert 900k records into the sales table. The test script ins.sql, which simulates the active workload, contains:

insert into sales select to_date('10092014','ddmmyyyy'),444
     from dual connect by level<=900000;
commit;

Starting the trial workload :

[oracle@laboms ~]$ sqlplus sales/sales@localhost:1521/salespdb

SQL*Plus: Release 12.1.0.1.0 Production on Wed Apr 23 17:08:43 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Wed Apr 23 2014 16:54:27 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL>
SQL> @ins.sql;

900000 rows created.

In another session, the datafile of the tablespace into which the trial workload is inserting is moved using ALTER DATABASE ... MOVE DATAFILE. The datafile ts_trial.ora is moved to a different location.

SQL > alter database move datafile '/u01/app/oracle/oradata/cdb1/salespdb/ts_trial.ora' to '/u01/app/oracle/oradata/tmp/ts_trial.ora';

Database altered.

The datafile move was possible even as the workload was executing.

Checking the contents of v$datafile after the ALTER shows that the file has been moved to the requested location (the last entry in the listing below).
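
The datafile names can be listed with, for example:

SQL> select name from v$datafile;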


NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/root/undo.ora
/u01/app/oracle/oradata/cdb1/salespdb/system.ora
/u01/app/oracle/oradata/cdb1/salespdb/sysaux.ora
/u01/app/oracle/oradata/cdb1/salespdb/users.ora
/u01/app/oracle/oradata/cdb1/salespdb/ts_sales.ora
/u01/app/oracle/oradata/tmp/ts_trial.ora

6 rows selected.

These online operations increase database availability even while administrative tasks are being performed.

Monday 21 April 2014

Point-in-time recovery of a pluggable database.

In Oracle Database 12c, it is possible to do an incomplete recovery of a pluggable database without affecting other pluggable databases or the container database. The mechanism is similar to a database point-in-time recovery. An auxiliary instance is required to stage the restore and recovery.

The auxiliary instance consists of


  1. The entire set of tablespaces (system, sysaux and others) belonging to the target pdb.
  2. Tablespaces from the container database which are needed to perform point-in-time recovery (at least the system, sysaux, and undo).


After the restore and recovery are complete, the target pluggable database is at the desired recovery point.
The auxiliary instance is then dropped, and the pluggable database is ready to be opened for use.

Scenario:

Container db name : CDB1
Tenant dbs        : salespdb, hrpdb
Target pdb        : salespdb
Recovery point    : recover to a specific scn, at which the table sales.sales was known to exist in SALESPDB.
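
An SCN like this would typically be captured while the table still exists, for example:

-- note the current SCN before the failure, to use as the recovery point
select current_scn from v$database;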

Pre-requisites :

  • The target pdb salespdb must be closed (see the example after this list).
  • Adequate disk space should be provisioned to stage the auxiliary instance.
  • A backup of the pdb and the container db should have been taken before the recovery point.
  • Archive logs must exist to do media recovery from the time of the backup until the desired recovery point.
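
For the first prerequisite, the PDB can be closed beforehand with, for example:

alter pluggable database salespdb close immediate;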


Procedure :
Connect to the root container with RMAN and execute the run block below.

RMAN> run {
2>  set until scn 1522385;
3>  restore pluggable database salespdb;
4>  recover pluggable database salespdb auxiliary destination '/u01/app/oracle/oradata/tmp';
5> }


Note:
In the run block above, a recovery point SCN is specified first. This is the SCN to which the named PDB will be recovered. The RECOVER statement in the run block also specifies an auxiliary destination folder for staging the auxiliary instance. This run block will restore and recover the pluggable database salespdb to that recovery point.

After recovery, we will examine the record count of a dropped table to make sure its data has been recovered.

RMAN generates and executes a series of memory scripts to achieve the restore/recover.
The scripts and rman output are discussed below.


1) First, RMAN does an in-place restore of the PDB datafiles from an appropriate backup set, according to the specified recovery point SCN.

RMAN> run {
2>  set until scn 1522385;
3>  restore pluggable database salespdb;
4>  recover pluggable database salespdb auxiliary destination '/u01/app/oracle/oradata/tmp';
5> }

executing command: SET until clause

Starting restore at 20-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/cdb1/salespdb/system.ora
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/cdb1/salespdb/sysaux.ora
channel ORA_DISK_1: restoring datafile 00013 to /u01/app/oracle/oradata/cdb1/salespdb/users.ora
channel ORA_DISK_1: restoring datafile 00014 to /u01/app/oracle/oradata/cdb1/salespdb/ts_sales.ora
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/fra/CDB1/F75C25CAA70D1833E0430100007F21F9/backupset/2014_04_19/o1_mf_nnndf_TAG20140419T103559_9o3r9wnb_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/oradata/fra/CDB1/F75C25CAA70D1833E0430100007F21F9/backupset/2014_04_19/o1_mf_nnndf_TAG20140419T103559_9o3r9wnb_.bkp tag=TAG20140419T103559
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 20-APR-14

Starting recover at 20-APR-14
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS

Creating automatic instance, with SID='BvDq'

initialization parameters used for automatic instance:
db_name=CDB1
db_unique_name=BvDq_pitr_salespdb_CDB1
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/oradata/tmp
log_archive_dest_1='location=/u01/app/oracle/oradata/tmp'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used


starting up automatic instance CDB1

Oracle instance started

Total System Global Area    1068937216 bytes

Fixed Size                     2296576 bytes
Variable Size                281019648 bytes
Database Buffers             780140544 bytes
Redo Buffers                   5480448 bytes
Automatic instance created

2) Next, RMAN prepares a controlfile for the auxiliary instance from a suitable backup of the CDB.

contents of Memory Script:
{
# set requested point in time
set until  scn 1522385;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
}
executing Memory Script

executing command: SET until clause

Starting restore at 20-APR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=75 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/oradata/fra/CDB1/autobackup/2014_04_19/o1_mf_s_845289482_9o3rdc8b_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/oradata/fra/CDB1/autobackup/2014_04_19/o1_mf_s_845289482_9o3rdc8b_.bkp tag=TAG20140419T103802
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:09
output file name=/u01/app/oracle/oradata/tmp/CDB1/controlfile/o1_mf_9o64vdxg_.ctl
Finished restore at 20-APR-14

sql statement: alter database mount clone database

3) RMAN finalizes the datafiles for the auxiliary instance in this step by including any files from the CDB itself which may be needed for the recovery.

The SYSTEM, UNDO, SYSAUX, and USERS tablespaces from the container database are needed to create the auxiliary instance. Suitable backups of those CDB datafiles are restored below to the auxiliary destination and assigned OMF names via SET NEWNAME FOR ... TO NEW. The auxiliary instance controlfile is then updated with the new locations through a series of SWITCH statements, preparing it for the media recovery stage.


contents of Memory Script:
{
# set requested point in time
set until  scn 1522385;
# switch to valid datafilecopies
switch clone datafile  11 to datafilecopy 
 "/u01/app/oracle/oradata/cdb1/salespdb/system.ora";
switch clone datafile  12 to datafilecopy 
 "/u01/app/oracle/oradata/cdb1/salespdb/sysaux.ora";
switch clone datafile  13 to datafilecopy 
 "/u01/app/oracle/oradata/cdb1/salespdb/users.ora";
switch clone datafile  14 to datafilecopy 
 "/u01/app/oracle/oradata/cdb1/salespdb/ts_sales.ora";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  5 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  6 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 5, 3, 6;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

datafile 11 switched to datafile copy
input datafile copy RECID=1 STAMP=845367813 file name=/u01/app/oracle/oradata/cdb1/salespdb/system.ora

datafile 12 switched to datafile copy
input datafile copy RECID=2 STAMP=845367813 file name=/u01/app/oracle/oradata/cdb1/salespdb/sysaux.ora

datafile 13 switched to datafile copy
input datafile copy RECID=3 STAMP=845367813 file name=/u01/app/oracle/oradata/cdb1/salespdb/users.ora

datafile 14 switched to datafile copy
input datafile copy RECID=4 STAMP=845367813 file name=/u01/app/oracle/oradata/cdb1/salespdb/ts_sales.ora

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 20-APR-14
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_undotbs_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/oradata/fra/CDB1/backupset/2014_04_19/o1_mf_nnndf_TAG20140419T103559_9o3r8hgk_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/oradata/fra/CDB1/backupset/2014_04_19/o1_mf_nnndf_TAG20140419T103559_9o3r8hgk_.bkp tag=TAG20140419T103559
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:36
Finished restore at 20-APR-14

datafile 1 switched to datafile copy
input datafile copy RECID=9 STAMP=845367911 file name=/u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_system_9o64w899_.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=10 STAMP=845367911 file name=/u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_undotbs_9o64w8n2_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=11 STAMP=845367911 file name=/u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_sysaux_9o64w8nc_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=12 STAMP=845367911 file name=/u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_users_9o64w8qj_.dbf

4) All clone datafiles are brought online to enable media recovery, media recovery is performed, and the auxiliary instance is dropped at the end.


contents of Memory Script:
{
# set requested point in time
set until  scn 1522385;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  5 online";
sql clone "alter database datafile  3 online";
sql clone 'SALESPDB' "alter database datafile 
 11 online";
sql clone 'SALESPDB' "alter database datafile 
 12 online";
sql clone 'SALESPDB' "alter database datafile 
 13 online";
sql clone 'SALESPDB' "alter database datafile 
 14 online";
sql clone "alter database datafile  6 online";
# recover pdb
recover clone database tablespace  "SYSTEM", "UNDOTBS", "SYSAUX", "USERS" pluggable database 
 'SALESPDB'   delete archivelog;
sql clone 'alter database open read only';
plsql <<<begin
   add_dropped_ts;
end; >>>;
plsql <<<begin
   save_pdb_clean_scn;
end; >>>;
# shutdown clone before import
shutdown clone abort
plsql <<<begin
   pdbpitr_inspect(pdbname =>  'SALESPDB');
end; >>>;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  5 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  11 online

sql statement: alter database datafile  12 online

sql statement: alter database datafile  13 online

sql statement: alter database datafile  14 online

sql statement: alter database datafile  6 online

Starting recover at 20-APR-14
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 47 is already on disk as file /u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_19/o1_mf_1_47_9o3rd8fo_.arc
archived log for thread 1 with sequence 48 is already on disk as file /u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_19/o1_mf_1_48_9o3tjwyy_.arc
archived log for thread 1 with sequence 49 is already on disk as file /u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_20/o1_mf_1_49_9o60dlml_.arc
archived log for thread 1 with sequence 50 is already on disk as file /u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_20/o1_mf_1_50_9o60rgmt_.arc
archived log file name=/u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_19/o1_mf_1_47_9o3rd8fo_.arc thread=1 sequence=47
archived log file name=/u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_19/o1_mf_1_48_9o3tjwyy_.arc thread=1 sequence=48
archived log file name=/u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_20/o1_mf_1_49_9o60dlml_.arc thread=1 sequence=49
archived log file name=/u01/app/oracle/oradata/fra/CDB1/archivelog/2014_04_20/o1_mf_1_50_9o60rgmt_.arc thread=1 sequence=50
media recovery complete, elapsed time: 00:00:18
Finished recover at 20-APR-14

sql statement: alter database open read only



Oracle instance shut down


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/oradata/tmp/CDB1/datafile/o1_mf_sysaux_9o64w8nc_.dbf deleted
auxiliary instance file /u01/app/oracle/oradata/tmp/CDB1/controlfile/o1_mf_9o64vdxg_.ctl deleted
Finished recover at 20-APR-14

5) Opening the newly recovered PDB from SQL*Plus and checking the sales.sales table.


SQL> alter pluggable database salespdb open resetlogs;

Pluggable database altered.

SQL> select count(*) from sales;

  COUNT(*)
----------
        89

Conclusion:

Point-in-time recovery of a pluggable database is similar to point-in-time recovery of a non-CDB, but it differs in that it also requires some datafiles from the container database. Other pluggable databases remain open and available while the recovery is in progress, and afterwards as well.