Monday 21 July 2014

Creating a pluggable database in Oracle 12c.

Oracle Database 12c introduced the concept of a pluggable database (PDB), which can be plugged into a multitenant container database (CDB), unplugged when required, transported, and re-plugged into a different container database.

With the introduction of pluggable databases, database creation has become a fast and efficient operation. The following methods exist to create a pluggable database:

  1. Create a pluggable database using the seed container. See Example 1 below.
The seed data files are copied to the new location and used by the new database; the catalog scripts, etc. do not need to be executed afresh.

  2. Create a pluggable database by duplicating an existing one. See Example 2 below.
The files from the existing database are copied to the location of the new database. The new database is automatically assigned a new PDB ID to distinguish it from its parent.

  3. Create a pluggable database by plugging it in. See Example 3 below.
Because a pluggable database is self-contained, it can be detached from its container and transported to a different container. The action of detaching it is called an unplug operation. The unplug operation produces an XML file (called a manifest file) which describes the database and contains the information necessary for a re-attach (including PDB-specific initialization parameters). This offers another quick method of creating a pluggable database.

  4. Create a pluggable database from a non-CDB source database. See Example 4 below.
A database created as a non-pluggable database is known as a non-CDB. A non-CDB is similar to a pre-12c database.

The first step is to open the non-CDB read-only and generate a self-describing XML file, known as a manifest file, using the DBMS_PDB.DESCRIBE procedure. Having created the manifest file, the database can be plugged into the CDB by either copying the source datafiles or using them in place.

  5. Create a pluggable database using Oracle GoldenGate.
This method uses database replication technology. In this technique, Oracle GoldenGate software is configured to capture changes occurring at a source database, and to transmit and apply them at an Oracle 12c PDB destination until the destination is fully synchronized.

While this technique requires significant investment and is complex, it offers fault tolerance, availability, and flexibility.

The main requirement imposed on the production database by GoldenGate is that supplemental logging must be enabled at the source database for the captured tables/columns.
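As a minimal sketch (the hr.employees table and the ggadmin credentials here are illustrative assumptions, not from a specific configuration), enabling supplemental logging at the source might look like this:

```sql
-- On the source database: enable minimal supplemental logging at the database level.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- In GGSCI, enable table-level supplemental logging for each captured table:
--   GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
--   GGSCI> ADD TRANDATA hr.employees
```

The database-level statement adds minimal supplemental logging; ADD TRANDATA then enables the table-level logging GoldenGate capture needs.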



Example 1 - Create a PDB using the seed

To create a pluggable database from the seed, use the CREATE PLUGGABLE DATABASE statement and specify the FILE_NAME_CONVERT clause as shown.

Prerequisites
  1. Connect to the root with the CREATE PLUGGABLE DATABASE privilege.
  2. Configure a storage location (and ensure disk space) for the new database's files.
  3. Select a unique name for the new database.


SQL> create pluggable database PDBTEST
  2   admin user pdbdba identified by oracle
  3   file_name_convert=('/u01/app/oracle/oradata/cdb1/seed',
  4                      '/u01/app/oracle/oradata/cdb1/pdbtest')
  5   ;

Pluggable database created.


After the database is created, it is in the NEW state. To complete the database creation, it is necessary to open it read-write at least once, as shown.

SQL> alter pluggable database pdbtest open read write;

Pluggable database altered.

SQL> column pdb_name format a20
SQL> column status format a20
SQL> select pdb_name,status from cdb_pdbs;

PDB_NAME             STATUS
-------------------- --------------------
PDB$SEED             NORMAL
HRPDB                NORMAL
SALESPDB             NORMAL
PDBTEST              NORMAL


Listing the datafiles of the new database, we see that it has its own SYSTEM, SYSAUX, and default user tablespaces; the undo tablespace is always shared with the CDB.

SQL> alter session set container=pdbtest;   

Session altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb1/root/undo.ora
/u01/app/oracle/oradata/cdb1/pdbtest/system.ora
/u01/app/oracle/oradata/cdb1/pdbtest/sysaux.ora
/u01/app/oracle/oradata/cdb1/pdbtest/users.ora


Example 2 - Create a PDB by duplicating an existing one

An existing pluggable database can be rapidly duplicated by using CREATE PLUGGABLE DATABASE ... FROM <existing-pdb>.

Prerequisites
  1. Connect to the root with the CREATE PLUGGABLE DATABASE privilege.
  2. Configure a storage location (and ensure disk space) for the new database's files.
  3. Select a unique name for the new database.
  4. Place the source pluggable database in read-only mode. For this example we will create a copy of the HRPDB database.

SQL> select name from v$pdbs;

NAME
------------------------------
PDB$SEED
HRPDB
SALESPDB
PDBTEST

SQL> alter pluggable database hrpdb close;

Pluggable database altered.

SQL> alter pluggable database hrpdb open read only;

Pluggable database altered.


  5. Issue the CREATE PLUGGABLE DATABASE … FROM statement.
SQL> create pluggable database HRCOPY
  2   FROM HRPDB
  3   file_name_convert=('/u01/app/oracle/oradata/cdb1/hrpdb',
  4                      '/u01/app/oracle/oradata/cdb1/hrcopy')
  5  ;

Pluggable database created.


  6. Open it at least once in read-write mode to complete the integration with the CDB.

SQL> alter pluggable database hrcopy open read write;



Example 3 - Create a PDB by a plug-in operation

It is possible to unplug a pluggable database from container A and plug it into container B.
The plug-in mechanism relies on an XML file which describes the metadata of the unplugged database, so that it can be read and processed by the new container.

Overview of steps
  1. First, configure adequate storage at the site which will soon have a new PDB.
  2. Then, unplug the source database by executing ALTER PLUGGABLE DATABASE <plug1> UNPLUG INTO '<xml-file>'.
  3. This will create an XML file and detach the pluggable database.
  4. The datafiles belonging to the database, along with the XML file, can be transported to the new site, and a simple CREATE PLUGGABLE DATABASE <plug2> USING '<xml-file>' will accomplish the job.

Example:
There is a pluggable database called PDB1 in the container database CDB2. The objective is to unplug this database from CDB2 and plug it into the container database CDB1, thus creating a new database in CDB1 from the unplugged database.

First, configure storage at the site which hosts the CDB1 container.
To prepare for plug-in, create a folder called newpdb at the CDB1 site
to house the datafiles.


[oracle@laboms ~]$ mkdir -p /u01/app/oracle/oradata/cdb1/newpdb



Then, ensure the source PDB is closed consistently, and unplug it.

[oracle@laboms oradata]$ sqlplus sys/oracle@localhost:1521/CDB2 as sysdba
SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1    
  2   unplug into '/home/oracle/pdb1_mdata.xml';

Pluggable database altered.


Transport the pdb1_mdata.xml and the datafiles belonging to PDB1 to the CDB1 site.
The datafiles are:
  1. /u01/app/oracle/oradata/cdb2/pdb1/system.ora
  2. /u01/app/oracle/oradata/cdb2/pdb1/sysaux.ora
  3. /u01/app/oracle/oradata/cdb2/pdb1/users.ora

After plug-in, these paths will change to:

  1. /u01/app/oracle/oradata/cdb1/newpdb/system.ora
  2. /u01/app/oracle/oradata/cdb1/newpdb/sysaux.ora
  3. /u01/app/oracle/oradata/cdb1/newpdb/users.ora


Next, plug-in the database, as shown.



SQL> create pluggable database newpdb                                   
  2    using '/home/oracle/pdb1_mdata.xml'
  3   file_name_convert=('/u01/app/oracle/oradata/cdb2/pdb1',
  4              '/u01/app/oracle/oradata/cdb1/newpdb')
  5  storage  (maxsize 1g max_shared_temp_size 100m);

Pluggable database created.

Note:
- The file_name_convert parameter is required to tell Oracle how to translate paths found in the XML manifest, such as:
    <file>
      <path>/u01/app/oracle/oradata/cdb2/pdb1/seed/users.ora</path>
      <afn>22</afn>
      <rfn>7</rfn>

to the destination path.

- The MOVE keyword, if specified, causes the source datafiles to be moved to the destination rather than copied.

- If the source datafiles are not accessible from the CDB1 site, and they have been copied to a staging location such as '/d01/stage', then the source_file_name_convert parameter becomes necessary to translate the XML paths to the staging directory so that the files can be copied or moved to their destination in CDB1.
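For instance (a sketch only; the staging directory '/d01/stage' is the hypothetical location mentioned above), the plug-in statement would then use both convert parameters:

```sql
-- The XML paths point at the original cdb2 location; source_file_name_convert
-- redirects them to the staging area where the files actually reside, and
-- file_name_convert names their final destination under cdb1.
create pluggable database newpdb
  using '/home/oracle/pdb1_mdata.xml'
  copy
  source_file_name_convert=('/u01/app/oracle/oradata/cdb2/pdb1',
                            '/d01/stage')
  file_name_convert=('/d01/stage',
                     '/u01/app/oracle/oradata/cdb1/newpdb');
```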

Finally, open the database read-write once to complete the operation.
SQL> alter pluggable database newpdb open read write;

Pluggable database altered.

Example 4 - Create a PDB using a non-CDB

Support for integration into a CDB is built into the non-CDB. The non-CDB is still a 12c database, and it supplies a package called DBMS_PDB which enables the user to create a self-describing XML file, describing the database for a possible future plug-in operation.

Having created this XML file, it is possible to plug the non-CDB into a CDB using CREATE PLUGGABLE DATABASE <pdb-name> USING '<xml-file>'. You can optionally copy the non-CDB source files to a new location or use them in place.

Finally, connect to the new PDB, run the Oracle-supplied migration script noncdb_to_pdb.sql, and then open the database in read-write mode.
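If the non-CDB's datafiles are already in their final location, they can be used in place with the NOCOPY option. A minimal sketch (reusing the noncdb1.xml manifest path from the example below):

```sql
-- Use the non-CDB datafiles where they are, instead of copying them.
-- TEMPFILE REUSE avoids an error if a tempfile already exists at that location.
create pluggable database plug1
  using '/home/oracle/noncdb1.xml'
  nocopy
  tempfile reuse;
```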

Prerequisites
  1. The non-CDB is an Oracle 12c database.
  2. It has been shut down consistently and opened read-only.
  3. Storage has been configured for the new pluggable database.

Steps
  1. Start up the source non-CDB database in read-only mode after a clean shutdown.
  2. Run the dbms_pdb.describe procedure and generate the XML file.
  3. Shut down the non-CDB, and run the CREATE PLUGGABLE DATABASE statement.
  4. In the CDB, connect to the new container and run the migration script $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql.
  5. Open the new database in read-write mode at least once.


SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2296576 bytes
Variable Size             385877248 bytes
Database Buffers          675282944 bytes
Redo Buffers                5480448 bytes
Database mounted.
SQL> alter database open read only;

Database altered.


SQL> begin
 dbms_pdb.describe(pdb_descr_file => '/home/oracle/noncdb1.xml');
end;
 /

PL/SQL procedure successfully completed.



SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

[oracle@laboms ~]$ sqlplus sys/oracle@localhost:1521/cdb1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 14 16:12:17 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
SQL> create pluggable database plug1
  2   using '/home/oracle/noncdb1.xml'
  3    copy
  4     file_name_convert=('/u01/app/oracle/oradata/noncdb1',
  5                        '/u01/app/oracle/oradata/cdb1/plug1')
  6   ;

Pluggable database created.

SQL> alter session set container=plug1;

Session altered.
SQL> @?/rdbms/admin/noncdb_to_pdb.sql




Finally, when the script is done (it takes a while to complete), open the database in read-write mode.

SQL> alter pluggable database open read write;

Pluggable database altered.




Sunday 13 July 2014

Privilege Analysis in Oracle Database 12c

Oracle security is based on the principle of granting only the minimum privileges required to perform a function. However, access to a database can be gained by way of a direct grant or by way of a role, and the chain by which access was gained can become complex to analyze. Third-party database applications may grant more privileges than necessary to the accounts which access the database, causing security vulnerabilities.




To address these needs, Oracle Database 12c enables analysis of privilege and role usage, and reporting on the analysis, in order to see whether more privileges have been granted than are required, so that they may be revoked without affecting the ability of the applications to continue functioning.

Privilege analysis consists of these steps:

  1. As a user who has privilege analysis rights, i.e. the CAPTURE_ADMIN role, create a privilege capture policy and enable it.

  2. The database users continue accessing the database, while the privilege capture policy records usage of the privileges in database dictionary views for a representative observation period.

  3. The capture administrator can query the views to determine whether the granted privileges are being used.

  4. If there are any privileges and roles which are not being used, they may be revoked without impact to the applications.

The benefit of privilege capture is that it helps the DBA track which privileges are not required and revoke them, thus securing the database.

For example, suppose there is a role called HR_MGR which has been granted two privileges.

CDB name   PDB name   Role name   Grants
---------- ---------- ----------- ----------------------
CDB1       HRPDB      HR_MGR      CREATE ANY TABLE
                                  SELECT ON hr.employees

This role will be granted to a new user named SCOTT, whose usage will be monitored by creating a privilege analysis policy.

  1. First, create the role HR_MGR in the HRPDB database.

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 12 11:55:44 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> create role hr_mgr ;

Role created.

SQL> grant select on hr.employees to hr_mgr;

Grant succeeded.

SQL> grant create any table to hr_mgr;

Grant succeeded.


  2. Next, create the user named SCOTT.

SQL> create user scott identified by tiger default tablespace example;

User created.

SQL> grant create session to scott;

Grant succeeded.

SQL> alter user scott quota 1m on example;

User altered.

SQL> grant create table to scott ;

Grant succeeded.

SQL>

  3. Grant the role HR_MGR to the user SCOTT.

SQL> grant hr_mgr to scott;

Grant succeeded.

SQL>


We have set up the user and his role at this point. Now to set up the capture.

1.     The CAPTURE_ADMIN role is necessary to administer privilege capture. This role is granted to the user SYS. The new package DBMS_PRIVILEGE_CAPTURE enables privilege capture administration. It can create privilege captures for the entire database, for a specific role, or for a combination of role and session context. We want to create a role capture. The following PL/SQL block creates the capture named CAPTURE1, which will capture usage of the role named HR_MGR.

SQL> -- create role capture policy
begin
 dbms_privilege_capture.create_capture(name => 'CAPTURE1',
                        description => 'capture demo',
                        type => dbms_privilege_capture.g_role,
                        roles => role_name_list('HR_MGR')
                        );
end;
/
PL/SQL procedure successfully completed.


2.     Enable the capture, to start the monitoring.

SQL> -- enable the capture
begin
  dbms_privilege_capture.enable_capture(name => 'CAPTURE1');
end;
/
PL/SQL procedure successfully completed.


3.     Disconnect and exit from this SQL*Plus session. Otherwise the privilege usage views are not properly updated.

4.     Connect as the SCOTT user and exercise the privileges which are being monitored for usage. For example, use the SELECT privilege on hr.employees granted through the HR_MGR role, and create a table in SCOTT's local schema.

[oracle@laboms ~]$ sqlplus scott/tiger@localhost:1521/hrpdb

SQL*Plus: Release 12.1.0.1.0 Production on Sun Jul 13 08:26:40 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sat Jul 12 2014 12:24:36 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select * from hr.employees fetch first 10 rows only;

        ID NAME                 START_DAT      EMPNO
---------- -------------------- --------- ----------
         1 EMP_ 00000001        06-OCT-03      14211
         2 EMP_ 00000002        05-APR-04      14003
         3 EMP_ 00000003        17-NOV-02      15669
         4 EMP_ 00000004        16-JUN-00       8135
         5 EMP_ 00000005        02-OCT-03       5783
         6 EMP_ 00000006        20-MAR-05       8154
         7 EMP_ 00000007        06-MAY-02       1676
         8 EMP_ 00000008        11-DEC-01      15087
         9 EMP_ 00000009        16-MAY-04       7462
        10 EMP_ 00000010        25-MAR-00      11849

10 rows selected.

SQL> create table test as select * from hr.employees;

Table created.

SQL>


5.     Disconnect and exit from this SQL*Plus session. Otherwise the privilege usage views are not properly updated.

6.     Connect as SYSDBA and execute the DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE and DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT procedures to stop the capture and generate the results.

[oracle@laboms ~]$ sqlplus sys/oracle@localhost:1521/hrpdb as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sun Jul 13 08:33:10 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL>
SQL> begin
  dbms_privilege_capture.disable_capture(name => 'CAPTURE1');
  dbms_privilege_capture.generate_result(name => 'CAPTURE1');
end;
/

PL/SQL procedure successfully completed.

7.     After this, the privilege usage profile of the role HR_MGR can be queried from data dictionary views like these.

View                   Description
---------------------  ---------------------------------------------------
DBA_USED_OBJPRIVS      Object privileges used during the capture period.
DBA_UNUSED_SYSPRIVS    System privileges not used during the capture period.

8.     Below are the query results from the DBA_USED_OBJPRIVS view.
This view lists the object privileges used during the capture period.


SQL> column capture format a10
column username format a10
column used_role format a10
column obj_priv format a10
column object_owner format a15
column object_name format a15

SQL> select capture,username,used_role,obj_priv,object_owner,object_name from dba_used_objprivs;

CAPTURE    USERNAME   USED_ROLE  OBJ_PRIV   OBJECT_OWNER    OBJECT_NAME
---------- ---------- ---------- ---------- --------------- ---------------
CAPTURE1   SCOTT      HR_MGR     SELECT     HR              EMPLOYEES

9.     More relevant to security administration, however, are the privileges which were granted but NOT used during the representative capture period. That information is available from the DBA_UNUSED_*PRIVS views.


SQL> column capture format a10
column username format a10
column used_role format a10
column obj_priv format a10
column object_owner format a15
column object_name format a15
column rolename format a20
column sys_priv format a20
SQL>    select capture,rolename,sys_priv from dba_unused_sysprivs
  2  /

CAPTURE    ROLENAME             SYS_PRIV
---------- -------------------- --------------------
CAPTURE1   HR_MGR               CREATE ANY TABLE


The user SCOTT in this example did NOT use the CREATE ANY TABLE privilege, and this is exposed in the dba_unused_sysprivs view.

The privilege analysis feature enables the administrator to determine that the CREATE ANY TABLE privilege is unused and can be revoked from the HR_MGR role.
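Acting on this finding and retiring the capture might look like the following sketch:

```sql
-- Revoke the unused privilege from the role.
revoke create any table from hr_mgr;

-- Drop the capture once its results are no longer needed.
begin
  dbms_privilege_capture.drop_capture(name => 'CAPTURE1');
end;
/
```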




Wednesday 9 July 2014

Sql*Loader 12.1 Express mode

Loading Data Using Sql*Loader Express Load mode.

Sql*Loader is Oracle’s high-speed data loading utility. However, when working with Sql*Loader, it has always been necessary to first create a control file for the utility to work with.

Creating the loader control file requires getting the syntax, column positions, etc. correct, and it can be an error-prone process.

Starting with Oracle Database 12c, Sql*Loader supports an Express Load mode, which does not require a loader control file for simple loads.

Express Load mode eases the task of loading data into the database, enables automatic parallelism, and avoids the need to split the input into multiple files for parallel loads.

There are a few conditions for Express Load mode to work:
  1. The input data columns are simple data types only – text, date-time, number.
  2. The input data columns are in the same order as they appear in the table.
  3. The control file parameter is not specified on the command line.

In Express Load mode, the user simply invokes the loader and specifies a table name and user credentials. For example, to load a text file called employees.dat into the table hr.employees in the database called noncdb1:


$ sqlldr hr/hr@localhost:1521/noncdb1 table=employees


Because the control file parameter was not specified, Sql*Loader will launch in Express Load mode and look for a file called employees.dat.

In this mode, Sql*Loader will look up the target table's structure and write out a loader control file in its log file for possible future use, assuming:
1. The input file columns are comma-delimited.
2. Each record is newline-terminated.
3. The input file column order is the same as that of the employees table.
4. The user intends to append to the existing table.


Sql*Loader will also attempt an external table mode load of the data:
1. Create a temporary external table based on the input file.
2. Generate a script to do an INSERT INTO <target> SELECT * FROM <temp-external-table>.
3. Perform the insert.

The default load method for the Express Load option is the external table method.
Sql*Loader can also load data using one of these methods:

- Direct path load.
- Conventional path load.

In Express Load mode, Sql*Loader needs a directory object pointing to the input file location for an external table load. It first checks existing directory objects to see whether any point to the input file location, and if not it will try to create a temporary one.

If the user who invoked Sql*Loader does not have the necessary privileges, the directory creation fails, and Sql*Loader falls back to a direct path load and is still able to load the input file.
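The external table route can also be bypassed deliberately. A sketch (reusing the credentials from the example below; DIRECT is a standard Sql*Loader parameter honored in express mode):

```
$ sqlldr hr/hr@localhost:1521/hrpdb table=employees direct=true
```

This requests a direct path load instead of the default external table method, which avoids the need for a directory object entirely.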

Below is an example of an Express Load mode session loading a simple employees table. The employees table being loaded has this structure.


SQL> desc employees
Name       Type         Nullable Default Comments
---------- ------------ -------- ------- --------
ID         NUMBER(5)    Y                        
NAME       VARCHAR2(20) Y                        
START_DATE DATE         Y                         
EMPNO      NUMBER(5)    Y                        


A text input file named employees.dat, containing about 1000 employee data records in comma-delimited format, is available.


[oracle@laboms ~]$ cd /home/oracle
[oracle@laboms ~]$ ls -l employees.dat
-rwxr-xr-x 1 oracle oinstall 36368 Jul  9 16:03 employees.dat


The employees table is in a pluggable database named HRPDB, and belongs to the hr schema.

The loading session is started off from the $ prompt as shown.



[oracle@laboms ~]$ sqlldr hr/hr@localhost:1521/hrpdb table=employees

SQL*Loader: Release 12.1.0.1.0 - Production on Wed Jul 9 16:07:34 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMPLOYEES
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO

Table EMPLOYEES:
  1000 Rows successfully loaded.

Check the log files:
  employees.log
  employees_%p.log_xt
for more information about the load.
[oracle@laboms ~]$



Sql*Loader assumes that the input file has the same base name as the target table, with a .dat extension. The output shows that Sql*Loader started up in Express Load mode and used the external table method to load the table. 1000 records were loaded into the employees table.

Examining the log file employees.log, we see that:

  1. A control file has been scripted for possible future reuse.


Generated control file for possible reuse:
OPTIONS(EXTERNAL_TABLE=EXECUTE, TRIM=LRTRIM)
LOAD DATA
INFILE 'employees'
APPEND
INTO TABLE EMPLOYEES
FIELDS TERMINATED BY ","
(
  ID,
  NAME,
  START_DATE DATE,
  EMPNO
)
End of generated control file for possible reuse.



  2. Further down the log file, we see that it has created a temporary directory object, and an external table pointing to the input file employees.dat.


created temporary directory object SYS_SQLLDR_XT_TMPDIR_00000 for path /home/oracle

enable parallel DML: ALTER SESSION ENABLE PARALLEL DML

creating external table "SYS_SQLLDR_X_EXT_EMPLOYEES"

CREATE TABLE "SYS_SQLLDR_X_EXT_EMPLOYEES"
(
  "ID" NUMBER(5),
  "NAME" VARCHAR2(20),
  "START_DATE" DATE,
  "EMPNO" NUMBER(5)
)
ORGANIZATION external
(
  TYPE oracle_loader
  DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
  ACCESS PARAMETERS
    REJECT ROWS WITH ALL NULL FIELDS
    (
      "ID" CHAR(255),
      "NAME" CHAR(255),
      "START_DATE" CHAR(255)
        DATE_FORMAT DATE MASK "DD-MON-RR",
      "EMPNO" CHAR(255)
    )
  )
  location
  (
    'employees.dat'
  )
)REJECT LIMIT UNLIMITED

  3. It performs a direct-path, parallel insert from the external table into the target table.

INSERT /*+ append parallel(auto) */ INTO EMPLOYEES
(
  ID,
  NAME,
  START_DATE,
  EMPNO
)
SELECT
  "ID",
  "NAME",
  "START_DATE",
  "EMPNO"
FROM "SYS_SQLLDR_X_EXT_EMPLOYEES"

  4. It performs cleanup and exits. The temporary directory object is preserved (presumably for future loads).

dropping external table "SYS_SQLLDR_X_EXT_EMPLOYEES"

Table EMPLOYEES:
  1000 Rows successfully loaded.

Run began on Wed Jul 09 16:07:34 2014
Run ended on Wed Jul 09 16:07:37 2014

Elapsed time was:     00:00:02.50
CPU time was:         00:00:00.01


  5. Finally, retrieve the records from the table to verify the load.

SQL> select * from employees fetch first 10 rows only;

        ID NAME                 START_DAT      EMPNO
---------- -------------------- --------- ----------
         1 EMP_ 00000001        06-OCT-03      14211
         2 EMP_ 00000002        05-APR-04      14003
         3 EMP_ 00000003        17-NOV-02      15669
         4 EMP_ 00000004        16-JUN-00       8135
         5 EMP_ 00000005        02-OCT-03       5783
         6 EMP_ 00000006        20-MAR-05       8154
         7 EMP_ 00000007        06-MAY-02       1676
         8 EMP_ 00000008        11-DEC-01      15087
         9 EMP_ 00000009        16-MAY-04       7462
        10 EMP_ 00000010        25-MAR-00      11849

10 rows selected.