Saturday 29 November 2014

Privilege analysis - An example of using the GRANT PATH information



Oracle Database 12c privilege analysis enables the security administrator to analyze the path by which access was gained to a table or other object of interest. This path is called the GRANT PATH, and in complex cases it can be difficult to work out without privilege analysis.


Consider a case where a user HR has indirectly gained access to the SCOTT.EMPLOYEES table. This user gained access when he received a grant of a role named HR_USER. The HR_USER role in turn has received a grant of the role named HR_ADMIN, and HR_ADMIN has directly received a grant of SELECT on SCOTT.EMPLOYEES.

The 'grant path' is HR (user) --> HR_USER (role) --> HR_ADMIN (role)


username     :  hr
Role granted to hr : hr_user
Role granted to hr_user : hr_admin
Privilege granted to hr_admin : select on scott.employees

The information about this chain of grants is available in a capture, once its result is generated, via the DBA_USED_OBJPRIVS_PATH and DBA_USED_SYSPRIVS_PATH views. The example below shows how to retrieve the GRANT PATH.

Setting up the roles, creating the HR user and GRANTing the user the roles.

SQL> create role hr_admin;

Role created.

SQL> grant select on scott.employees to hr_admin;

Grant succeeded.

SQL> create role hr_user;

Role created.

SQL> grant hr_admin to hr_user;

Grant succeeded.

SQL> create user hr identified by oracle
 profile default default tablespace users quota unlimited on users;
  2 
User created.

SQL>  grant create session to hr;

Grant succeeded.

SQL>  grant hr_user to hr;

Grant succeeded.


Setting up the privilege capture…
The capture will be of the role type and will capture the usage of the role HR_USER.

SQL> begin
 dbms_privilege_capture.create_capture(name => 'CAPTURE_TRIAL',
  description => 'TRIAL',
  type => dbms_privilege_capture.g_role,
  roles => role_name_list('HR_USER'));
end;
  2    3    4    5    6    7  /

PL/SQL procedure successfully completed.


Enabling the capture…

SQL> begin
dbms_privilege_capture.enable_capture(name => 'CAPTURE_TRIAL');
end;
  2    3    4  /

PL/SQL procedure successfully completed.
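
Before exercising the privilege, the capture definition and its state can be confirmed. A minimal sketch, querying the DBA_PRIV_CAPTURES view:

SQL> select name, type, enabled from dba_priv_captures where name = 'CAPTURE_TRIAL';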

Exercising the privilege… as the new HR user…

SQL> conn hr/oracle@localhost:1521/noncdb1
Connected.
SQL> select * from scott.employees
  2   fetch first 5 rows only;

    EMP_ID NAME                 JOIN_DATE        MGR    DEPT_ID
---------- -------------------- --------- ---------- ----------
         1 emp0000001           28-NOV-14        123        100
         2 emp0000002           28-NOV-14        123        100
         3 emp0000003           28-NOV-14        123        100
         4 emp0000004           28-NOV-14        123        100
         5 emp0000005           28-NOV-14        123        100

SQL>

Stopping the privilege capture and collecting its results…
SQL> conn / as sysdba
Connected.
SQL>
SQL> begin
  2     dbms_privilege_capture.disable_capture('CAPTURE_TRIAL');
  3     dbms_privilege_capture.generate_result('CAPTURE_TRIAL');
  4  end;
  5  /

PL/SQL procedure successfully completed.
Querying the capture results…
The dba_used_objprivs_path view holds the grant path details for object privilege usage.

SQL> ed
Wrote file afiedt.buf

  1* select capture, used_role, obj_priv, object_owner, object_NAME,  path from dba_used_objprivs_path
SQL> /

CAPTURE         USED_ROLE  OBJ_PRIV   OBJ_OWNER  OBJECT_NAME     PATH
--------------- ---------- ---------- ---------- --------------- ----------------------------------------
CAPTURE_TRIAL   HR_ADMIN   SELECT     SCOTT      EMPLOYEES       GRANT_PATH('HR', 'HR_USER', 'HR_ADMIN')




The PATH column is actually a VARRAY collection and can be unpacked using a PL/SQL script, as shown below.

SQL> ed
Wrote file afiedt.buf

  1  declare
  2   v_grant_path GRANT_PATH;
  3   v_index varchar(128);
  4  begin
  5   select path into v_grant_path from dba_used_objprivs_path where sequence=1;
  6   v_index :=v_grant_path.first();
  7   while v_index is not null
  8   loop
  9   dbms_output.put_line(v_grant_path(v_index));
 10   v_index :=v_grant_path.next(v_index);
 11   end loop;
 12* end;
SQL> /
HR
HR_USER
HR_ADMIN


The grant path information allows the administrator to trace the trail of grants by which a user received access to a table of interest. In more complex, real-world cases, this would be difficult to analyze without the aid of privilege capture.
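
Once the analysis is complete, the capture and its generated results can be cleaned up with dbms_privilege_capture.drop_capture. A minimal sketch (the capture must already be disabled):

SQL> exec dbms_privilege_capture.drop_capture(name => 'CAPTURE_TRIAL');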

Friday 28 November 2014

ILM in 12c - ADO (automatic data optimization)



In Oracle Database 12c, it is possible to define a policy that specifies how and where data should be managed and stored as it ages. Such policies are known as ILM ADO policies.

Automatic Data Optimization (ADO):
Note: ADO and HEAT MAP are not supported for multitenant container databases.

Oracle Database 12c introduces a new, policy-driven way to manage long-term data storage and retention needs. In an Oracle 12c (non-CDB) database, a DBA can define a policy that specifies

  • How data is stored – by specifying the type of compression used to store it.
  • Where it gets stored - by specifying a tablespace to move older data to.
  • When it gets moved – 
    • by requiring a period of low or no access to that data to elapse, before the move.
    • by further qualifying the condition using a boolean condition (pl/sql function).

For example:
A rule can be defined on a table stating that

  • Rows which are rarely accessed should be compressed after a certain age (COMPRESSION TIERING).
  • Rows should be moved to bulk storage after they reach a certain age (STORAGE TIERING).
  • Rows should be treated using a combination of the above (COMPRESSION + STORAGE TIERING).

This is made possible by a new feature called HEAT MAP. When enabled, the database tracks how table data is accessed.
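
Heat map tracking is switched on via the HEAT_MAP initialization parameter; a minimal sketch of enabling it instance-wide:

SQL> alter system set heat_map = on;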

The types of information tracked are
  • Date/times of access
  • Nature of access - read or write
  • Access path - full scan, or an index probe.

This metadata can be queried through various *_HEAT_MAP_SEGMENT and *_HEAT_MAP_HISTOGRAM views.
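
For example, per-segment access metadata could be inspected with a query along these lines (a sketch; column names as documented for DBA_HEAT_MAP_SEGMENT):

SQL> select object_name, segment_write_time, segment_read_time, full_scan, lookup_scan
     from dba_heat_map_segment where owner = 'SCOTT';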

Using the accumulated heat map metadata, Oracle will evaluate and enforce the ILM rules defined - these rules are known as ILM ADO policies. The policies are automatically evaluated, and system jobs are auto-spawned to treat the data according to the policy's specification. This capability is called Automatic Data Optimization (ADO).

ADO policies can be defined at different levels of granularity

The table below gives an idea.

Level: Tablespace
  Applies to all tables in a specific tablespace.

  ALTER TABLESPACE sales DEFAULT
    ROW STORE COMPRESS BASIC
    SEGMENT AFTER 6 MONTHS OF NO MODIFICATION;

Level: Segment
  Applies to a specific segment only (useful for partitioned tables).

  ALTER TABLE sales_ado ILM ADD POLICY
    COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 6 MONTHS OF NO MODIFICATION;

Level: Row
  Applies to specific rows in a table.

  ALTER TABLE sales MODIFY PARTITION sales_q1_2002
    ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW
    AFTER 30 DAYS OF NO MODIFICATION;



ADO policy evaluation and enforcement


While the ADO environment is self-managing and performs its actions during maintenance windows, a couple of packages are useful to the administrator who wishes to do an ad hoc inspection or wants to tweak the ADO environment.


Procedure call                Useful for
dbms_ilm_admin.CUSTOMIZE_ILM  Customizing the ILM environment, to adjust how and when
                              policies are evaluated.
dbms_ilm.PREVIEW_ILM          Evaluating ILM policies to see whether their triggering
                              conditions are met.
dbms_ilm.EXECUTE_ILM          Executing an ADO task.

Example of ILM (storage tiering):

Reference: Oracle 12c obe

A storage tiering policy specifies that a table’s data will be migrated to a named tablespace when the amount of free space in the containing tablespace falls below a threshold (due to data growth).

We will create a test table, attach an ILM policy to it, pump it with data, and observe how ILM treats it in response to its growth.

About configuring the ILM environment for testing
The threshold at which an ILM action is launched in response to free space reduction can be customized, using the DBMS_ILM_ADMIN package (dbms_ilm_admin.customize_ilm).

We will configure the ILM environment such that the ILM policy we define is triggered as soon as the free space in the tablespace drops to 95% or less (that is, as soon as 5% or more of the tablespace’s available space is used up).


SQL> begin
  2   dbms_ilm_admin.customize_ilm(parameter => dbms_ilm_admin.TBS_PERCENT_USED,value => 5);
  3   dbms_ilm_admin.customize_ilm(parameter => dbms_ilm_admin.TBS_PERCENT_FREE,value =>  95);
  4  end;
  5  /

PL/SQL procedure successfully completed


This sets the ILM threshold to a suitably low value, so that the ILM policies and actions trigger without having to pump a lot of data into the test table.
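
The thresholds now in effect can be cross-checked against the DBA_ILMPARAMETERS view, e.g.:

SQL> select name, value from dba_ilmparameters;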

Further, we will use a test tablespace called USERS, which has only 5 MB of space and cannot autoextend.

Creating a test table and populating it with bulk data:

create table employees (
  emp_id number, name varchar(20),
  join_date date, mgr number, dept_id number)
 tablespace users;

Inserting about 20,000 records into it, to use up nearly all the space in the USERS tablespace.

SQL> insert into employees
  2   select rownum, 'emp' || ltrim(to_char(rownum,'0999999')) , sysdate, 123, 100 from dual
  3   connect by level<=20000
  4  ;

20000 rows inserted

SQL> commit;

Commit complete

Next, we attach a storage-tiering policy to the table. The policy specifies that the table’s rows must be moved to a different tablespace named LOW_COST_STORE (when tablespace usage reaches the ILM thresholds set up above).

SQL> alter table employees
  2   ilm add policy
  3   tier to low_cost_store;

Table altered.
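
The attached policy can be verified through the USER_ILMPOLICIES view, e.g.:

SQL> select policy_name, policy_type, enabled from user_ilmpolicies;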

Now it is time for the database to evaluate the policy, which we will do manually in this case with the script below (as the SCOTT user). Normally the database would evaluate the policies in the maintenance windows.

The script example below invokes the dbms_ilm.preview_ilm procedure, which evaluates all the ADO policies in the current schema and creates a task (in the INACTIVE state) to enforce them. The user may execute the created task immediately with a script (or else it would execute in a maintenance window).


SQL> set serveroutput on
SQL>
SQL> declare
  2   v_task_id number:=0;
  3  BEGIN
  4   dbms_ilm.preview_ilm(task_id => v_task_id,ilm_scope => dbms_ilm.SCOPE_SCHEMA);
  5   dbms_output.put_line('task id = ' || v_task_id);
  6   if v_task_id is null then
  7      dbms_output.put_line('task id is null');
  8   end if;
  9  END;
 10  /

task id =
task id is null

PL/SQL procedure successfully completed

However, the task id that the procedure is supposed to return to the caller is NULL (I do not know why).

So I query the user_ilmevaluationdetails view to see the task that it created, and retrieve its id from there (task id = 128).


SQL> select task_id, policy_name, object_owner, object_name, selected_for_execution
  from user_ilmevaluationdetails order by task_id desc  ;
  2 
TASK_ID POLICY_NAM OBJECT_OWNER    OBJECT_NAME     SELECTED_FOR_EXECUTION
------- ---------- --------------- --------------- ------------------------------
    128 P203       SCOTT           EMPLOYEES       SELECTED FOR EXECUTION

SQL>

Then query the user_ilmtasks to examine the task status (INACTIVE)

SQL> select task_id, state, to_char(creation_time,'dd-mon-yyyy hh24:mi:ss') creation_time
     from user_ilmtasks where task_id=128;
  2 
TASK_ID STATE     CREATION_TIME
------- --------- --------------------
    128 INACTIVE  27-nov-2014 13:17:52




Finally, we can execute the ILM task using a script. The dbms_ilm.execute_ilm_task procedure executes a previously evaluated ADO task. In the example below, I supply the previously retrieved task id to it.

SQL> declare
 v_task_id number:=128;
begin
 dbms_ilm.execute_ilm_task(task_id => v_task_id,
  execution_mode => dbms_ilm.ILM_EXECUTION_ONLINE,
  execution_schedule => dbms_ilm.SCHEDULE_IMMEDIATE);
end;
  2    3    4    5    6    7    8  /

PL/SQL procedure successfully completed.


The results of the task execution are available in user_ilmresults.

SQL> select task_id, job_name, job_state,
to_char(start_time,'dd-mon-yyyy hh24:mi:ss') start_time,
to_char(completion_time,'dd-mon-yyyy hh24:mi:ss') completion_time
from user_ilmresults where task_id=128;  2    3    4 

TASK_ID JOB_NAME        JOB_STATE                 START_TIME           COMPLETION_TIME
------- --------------- ------------------------- -------------------- --------------------
    128 ILMJOB318       COMPLETED SUCCESSFULLY    27-nov-2014 13:19:29 27-nov-2014 13:19:29


After the task completed, we can see that the table employees (which was originally in the USERS tablespace) is now contained in the LOW_COST_STORE tablespace.

SQL>  select tablespace_name from user_tables where table_name='EMPLOYEES';

TABLESPACE_NAME
------------------------------
LOW_COST_STORE

This was an example of storage tiering where the database has moved the table data to a different class of storage (presumably low-cost, bulk storage) in response to a space pressure condition.

Example of ILM (compression tiering):
In compression tiering, specific rows of data (of different ages) are compressed in place when policy-specified conditions are satisfied, such as a certain amount of time having passed since those rows were accessed or modified. Older, rarely accessed rows can be compressed to a greater degree to save space, whereas newer rows may be left uncompressed so that they can be accessed without having to decompress them.

A compression tiering policy would typically be specified at the row level, for example:

Alter table employees
 ilm add policy
  row store compress advanced  row
   after 10 days of no modification;

This is a row level ado policy, which means the policy will continue to apply to future rows so that they may be compressed when they become eligible. Further, as per the Oracle 12c obe page, for a table with a row level policy, compression can take place only if all the rows in a block satisfy the policy criteria.

On the other hand, a segment level policy applies to the entire segment, and will become automatically disabled after it executes at least once. Segment level compression tiering policies are meant for partitioned tables.
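
A segment-level compression tiering policy on a partition might look like the sketch below (the table and partition names are hypothetical):

alter table sales modify partition sales_q1_2014
 ilm add policy compress for archive high segment
 after 6 months of no access;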

Below, a row-level compression tiering policy is demonstrated.

Re-creating the same employee table…

SQL>
SQL> drop table employees;

Table dropped

SQL> create table employees (
  2    emp_id number, name varchar(20),
  3    join_date date, mgr number, dept_id number)
  4  /

Table created

SQL>
SQL>  insert /*+ APPEND */into employees
  2       select rownum, 'emp' || ltrim(to_char(rownum,'0999999')) , sysdate, 123, 100 from dual
  3       connect by level<=20000;

20000 rows inserted

SQL>      commit;

Commit complete

SQL>


Attaching the row level policy to the table…

SQL>
SQL> Alter table employees
  2  Ilm add policy
  3  Row store compress advanced
  4  row after 10 days of no modification;

Table altered

Flushing the heat map metadata to disk :

SQL>
SQL> begin
  2   dbms_ilm.flush_all_segments;
  3   end;
  4  /

PL/SQL procedure successfully completed.

Evaluating the ILM policy…



SQL> set serveroutput on
SQL>
SQL> declare
  2       v_task_id number:=0;
  3      BEGIN
  4       dbms_ilm.preview_ilm(task_id => v_task_id,ilm_scope => dbms_ilm.SCOPE_SCHEMA);
  5       dbms_output.put_line('task id = ' || v_task_id);
  6       if v_task_id is null then
  7          dbms_output.put_line('task id is null');
  8       end if;
  9  END;
 10  /

task id =
task id is null

PL/SQL procedure successfully completed


Retrieving the task id…

SQL>
SQL> select task_id, policy_name, object_owner, object_name, selected_for_execution
  2    from user_ilmevaluationdetails order by task_id desc  ;

   TASK_ID POLICY_NAME     OBJECT_OWNER    OBJECT_NAME     SELECTED_FOR_EXECUTION
---------- --------------- --------------- --------------- ------------------------------
       164 P305            SCOTT           EMPLOYEES       SELECTED FOR EXECUTION



Executing the ilm task with id=164

SQL> declare
  2   v_task_id number:=164;
  3  begin
  4   dbms_ilm.execute_ilm_task(task_id => v_task_id,
  5    execution_mode => dbms_ilm.ILM_EXECUTION_ONLINE,
  6    execution_schedule => dbms_ilm.SCHEDULE_IMMEDIATE);
  7  end;
  8  /

PL/SQL procedure successfully completed

Gathering optimizer stats (to collect metadata into user_tables)

SQL> BEGIN
  2   DBMS_STATS.GATHER_TABLE_STATS('SCOTT','EMPLOYEES');
  3   END;
  4  /

PL/SQL procedure successfully completed

Analyzing the compression statistics for the table’s rows… (as per the Oracle docs, compression_type=4096 is basic compression and compression_type=1 is no compression; ref. DBMS_COMPRESSION).

I am not getting advanced compression, and do not know the reason at this time, but compression tiering is demonstrated here.

SQL> select compression_type, COUNT(*)
  2    from (select dbms_compression.get_compression_type(ownname => 'SCOTT',
  3                                                       tabname => 'EMPLOYEES',
  4                                                       row_id  => t.rowid) compression_type
  5            from scott.employees t)
  6   GROUP BY COMPRESSION_TYPE
  7  /

COMPRESSION_TYPE   COUNT(*)
---------------- ----------
               1         56
            4096      19944



About 56 rows are uncompressed, whereas about 19,944 rows are compressed.
If more rows were added, they’d be stored uncompressed until the ADO policy kicks in again.

Adding a few more rows…

insert into employees
     select rownum, 'emp' || ltrim(to_char(rownum,'0999999')) , sysdate, 123, 100 from dual
     connect by level<=500;


Checking the compression statistics again…

SQL> select compression_type, COUNT(*)
  from (select dbms_compression.get_compression_type(ownname => 'SCOTT',
                                                     tabname => 'EMPLOYEES',
                                                     row_id  => t.rowid) compression_type
          from scott.employees t)
 GROUP BY COMPRESSION_TYPE  2    3    4    5    6  ;

COMPRESSION_TYPE   COUNT(*)
---------------- ----------
               1        556
            4096      19944

Again, this table has a few rows uncompressed (the newly added ones) whereas the others are compressed.

Thursday 6 November 2014

Converting a database from Unix to Windows using RMAN

Transporting an entire database from one platform to another using RMAN

Transportable Database is an extension of the transportable tablespace feature. Transporting tablespaces is faster than a Data Pump export/import because the data is copied directly instead of being pumped through the SQL engine.

The Oracle Transportable Database feature is driven through RMAN, using the CONVERT DATABASE command.

  • This form of the command requires the source database to be in READ ONLY mode.
  • User-created tablespaces are converted to the destination platform by RMAN and written to the destination specified.
  • Oracle-supplied tablespaces like SYSTEM, SYSAUX, etc. are written out as a Data Pump .DMP file.
  • The CONVERT DATABASE command writes out a text parameter file and a transport script, which are needed to mount the new database and complete the migration on the destination platform.
  • The pfile and transport script will need edits to parameters and path names to suit the destination.


Below is an example of converting a 12c non-CDB database named NONCDB1 from Linux 64-bit to Windows 64-bit.

                           Source                                Destination
OS Platform                Linux 64-bit, little endian           Windows 64-bit, little endian
Datafile location          /u01/app/oracle/oradata/noncdb1       /u01/app/oracle/oradata/win64/noncdb1
Convert script location    n.a.                                  n.a.
Transport script location  /home/oracle/transport.sql            n.a.

Note:
The source database must be opened read only during the conversion.

Pre-transport checking of the source database:
Before doing a transport operation, it is necessary to check the source database to ensure that it is indeed possible to transport it to the target platform, and that there are no incompatibilities or conditions that will prevent the transport. This check is provided by the DBMS_TDB package, through its CHECK_DB procedure.

This procedure accepts two arguments:

target_platform_name   The name of the target platform as it appears in
                       v$transportable_platform.platform_name.
skip_option            Allows skipping of specific types of tablespaces. Supported values:
                         SKIP_OFFLINE  – skips offline tablespaces
                         SKIP_READONLY – skips read-only tablespaces
                         SKIP_NONE     – checks all tablespaces
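
The exact platform-name string to pass as target_platform_name can be looked up beforehand, e.g.:

SQL> select platform_id, platform_name, endian_format
     from v$transportable_platform where platform_name like '%Windows%';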

Below, the check returns successfully and prints the ‘tdb check ok’ message, so this database is ready for conversion.

  1  declare
  2   tdb_check boolean;
  3  begin
  4    tdb_check:=dbms_tdb.check_db('Microsoft Windows IA (64-bit)',dbms_tdb.SKIP_NONE);
  5    if (tdb_check) then
  6       dbms_output.put_line('tdb check ok');
  7    else
  8       dbms_output.put_line('tdb check not ok');
  9    end if;
 10* end;
SQL> /
tdb check ok


PL/SQL procedure successfully completed.


Convert database clauses


Clause                                            Purpose
CONVERT DATABASE                                  Performs the conversion.
NEW DATABASE 'newdb'                              Sets the name of the new database.
TRANSPORT SCRIPT '/home/oracle/transport.sql'     Names the transport script (basically a
                                                  create-controlfile script, because a controlfile
                                                  cannot be transported to another platform).
TO PLATFORM 'Microsoft Windows IA (64-bit)'       Identifies the target platform for the conversion.
DB_FILE_NAME_CONVERT='source-path','destn-path'   Maps the source datafile paths to the destination
                                                  file paths.

In the example below, after connecting to RMAN, the CONVERT DATABASE command is issued and RMAN proceeds to convert the database. Because ON DESTINATION PLATFORM is not specified, the conversion is performed at the source.


Recovery Manager: Release 12.1.0.2.0 - Production on Wed Oct 29 16:01:45 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: NONCDB1 (DBID=1929467856)

RMAN> convert database
  new database 'newdb'
  transport script '/home/oracle/transport.sql'
  to platform 'Microsoft Windows IA (64-bit)'
db_file_name_convert='/u01/app/oracle/oradata/noncdb1','/u01/app/oracle/oradata/win64/noncdb1';
2> 3> 4> 5>
Starting conversion at source at 28-OCT-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK

External table SYS.OPATCH_XML_INV found in the database

Directory SYS.XSDDIR found in the database
Directory SYS.DATA_PUMP_DIR found in the database
Directory SYS.OPATCH_INST_DIR found in the database
Directory SYS.OPATCH_SCRIPT_DIR found in the database
Directory SYS.OPATCH_LOG_DIR found in the database
Directory SYS.ORACLE_BASE found in the database
Directory SYS.ORACLE_HOME found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00001 name=/u01/app/oracle/oradata/noncdb1/system.ora
converted datafile=/u01/app/oracle/oradata/win64/noncdb1/system.ora
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00003 name=/u01/app/oracle/oradata/noncdb1/undo.ora
        converted datafile=/u01/app/oracle/oradata/win64/noncdb1/undo.ora
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00002 name=/u01/app/oracle/oradata/noncdb1/sysaux.ora
converted datafile=/u01/app/oracle/oradata/win64/noncdb1/sysaux.ora
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile conversion                                    
input datafile file number=00004 name=/u01/app/oracle/oradata/noncdb1/users.ora
converted datafile=/u01/app/oracle/oradata/win64/noncdb1/users.ora
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:03

Edit init.ora file /u01/app/oracle/product/12.1.0/dbhome_2/dbs/init_00pm746n_1_0.ora. This PFILE will be used to create the database on the target platform
Run SQL script /home/oracle/transport.sql on the target platform to create database
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target platform
To change the internal database identifier, use DBNEWID Utility
Finished conversion at source at 28-OCT-14

The output ends with some post-conversion steps and instructions (the “Edit init.ora file…” and “Run SQL script…” lines above).

RMAN has written out a temporary parameter file (init_00pm746n_1_0.ora) to start up the new instance on the new platform.
RMAN has also written out a transport script (transport.sql), which has commands to create the controlfile on the destination platform, mount the database, and recompile it. The steps to be executed are shown below.

  1. Edit the dummy pfile that RMAN has created, in order to make it work on the destination platform. The pfile itself carries instructions on which parameters to review or change.



In this example, after the pfile has been edited as described above, it looks like this:


  db_name=newdb
  compatible=12.1.0.0.0
  control_files='c:\temp\control1.ctl','c:\temp\control2.ctl'
  sga_target=1000m
  pga_aggregate_target=900m


  2. Run the transport script, which contains a CREATE CONTROLFILE statement and commands to start up the database and recompile its modules. The transport script must be run against the converted database after editing the pfile mentioned above and using it to start the database instance. Before running the transport script, the unix-style paths in the CREATE CONTROLFILE statement must be edited to match the destination platform. The key parameter is control_files, which sets the stage for creation of the control file. The parameter file also needs to allocate sufficient SGA/PGA memory, otherwise the instance won’t come up.

 

Since the destination is a Windows host, a Windows service needs to be defined there using oradim.
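
A sketch of such an oradim invocation (the SID follows the example’s new database name; -STARTMODE is an assumption):

C:\> oradim -NEW -SID newdb -STARTMODE manual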



After the service is created the next step is to start up the instance in nomount mode and create the controlfile


SQL> startup nomount pfile='init.ora';
ORACLE instance started.

Total System Global Area 1048576000 bytes
Fixed Size                  3053584 bytes
Variable Size             222300144 bytes
Database Buffers          813694976 bytes
Redo Buffers                9527296 bytes



Having started the instance, the controlfile needs to be created. The CREATE CONTROLFILE statement from the transport script created by RMAN needs to be edited (to change the file locations from unix to Windows paths). Then the control file can be created; that statement will automatically mount the database.


SQL> CREATE CONTROLFILE REUSE SET DATABASE "NEWDB" RESETLOGS  NOARCHIVELOG
  2      MAXLOGFILES 6
  3      MAXLOGMEMBERS 2
  4      MAXDATAFILES 100
  5      MAXINSTANCES 2
  6      MAXLOGHISTORY 292
  7  LOGFILE
  8    GROUP 1 'c:\temp\newdb\redo1.ora'  SIZE 20M ,
  9    GROUP 2 'c:\temp\newdb\redo2.ora'  SIZE 20M
 10  DATAFILE
 11    'c:\temp\newdb\system.ora',
 12    'c:\temp\newdb\sysaux.ora',
 13    'c:\temp\newdb\undo.ora',
 14    'c:\temp\newdb\users.ora'
 15  CHARACTER SET US7ASCII
 16  ;

Control file created.


There are a few last steps in the transport script after the control file is created.
  • Edit the scripts as required for the Windows environment.







After controlfile creation, the script opens the database with the RESETLOGS and UPGRADE options.
It then adds a tempfile for the default temporary tablespace, restarts in upgrade mode and invalidates the database objects using utlirp.sql, and finally restarts in normal mode and recompiles them using utlrp.sql.

After utlrp.sql finishes execution, the database is ready. If needed, the DBNEWID (nid) utility can be used to assign a new DBID to the database.
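
A sketch of a DBNEWID invocation to regenerate just the DBID (run against the mounted database; the utility prompts for credentials and confirmation):

C:\> nid TARGET=SYS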