Thursday, 12 March 2015

Oracle Database 12c Real-Time ADDM



Oracle 12c has a new automatic performance analysis and troubleshooting feature known as Real-Time ADDM. This feature analyzes performance problems as soon as they are detected, attempts to find solutions, and saves its analysis as a Real-Time ADDM report.

The Oracle Database watches out for the following performance problems and tries to analyze them in real time, whereas previously they would get analyzed only at snapshot collection time:

  • High Load
  • IO bound instance
  • CPU bound instance
  • Over allocated memory
  • Interconnect bound instance (RAC)
  • Session limit approaching 100%
  • Process limit approaching
  • Hung sessions
  • Deadlock detected

As per the Oracle documentation, the MMON process analyzes instance performance every 3 seconds and triggers a Real-Time ADDM analysis if any of the conditions above is detected.

So what does a Real-Time ADDM report look like? Below are two trials in which an attempt was made to trigger the automatic ADDM by deliberately causing two of the events mentioned above:
  1. A session memory over-allocation trial – triggers automatic ADDM.
  2. A deadlock – no automatic ADDM.

Memory over-allocation trial – results in a Real-Time ADDM report.


In a test database, ensure that the init.ora parameter PGA_AGGREGATE_LIMIT is set to a non-zero value, and execute a script that keeps growing a PL/SQL collection until the PGA consumed exceeds that limit. Here is the script:


create or replace type t_city is object (city_name varchar2(100));
/
create or replace type t_city_coll is table of t_city;
/

declare
  v_city      varchar2(100) := 'testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest';
  v_city_coll t_city_coll := t_city_coll();
begin
  -- keep growing the collection until PGA use blows past PGA_AGGREGATE_LIMIT
  for i in 1 .. 10000000 loop
    v_city_coll.extend();
    v_city_coll(v_city_coll.count()) := t_city(v_city);
  end loop;
end;
/


It fails after 6 minutes with ORA-4036 (PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT).
But it is interesting to see whether it also triggers a Real-Time ADDM analysis.

Going to the Performance Hub and clicking on the ADDM tab, we can see that a Real-Time ADDM report was indeed generated; below, it can be seen that it has report_id=121.
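The report can also be pulled out of the AWR report repository with plain SQL. A minimal sketch, assuming the report id of 121 seen above (DBMS_AUTO_REPORT returns the report content as a CLOB, XML-formatted by default):

SET LONG 10000000 LONGCHUNKSIZE 100000 PAGESIZE 0
SELECT dbms_auto_report.report_repository_detail(rid => 121, type => 'XML')
FROM   dual;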



The screen grab below shows the ADDM findings tab of the Real-Time ADDM report, which lists the findings and offers a couple of recommendations: lower the instance memory consumption, or kill the session consuming the extra PGA.





Trying out Real-Time ADDM by triggering a deadlock


Trigger a deadlock error by using an autonomous transaction as follows. (This deadlock trial was carried out on a non-CDB with compatible=12.1.0.0.0.)

  1. Consider a table called emp with these 3 records.


SQL> select * from emp;

     EMPNO NAME                 JOIN_DATE END_DATE
---------- -------------------- --------- ---------
         1 emp1                 03-DEC-14
         2 emp2                 03-DEC-14
         3 emp3                 03-DEC-14


  2. Create two procedures, proc1 and proc2, as shown. Proc2 is called by proc1; it executes as an autonomous transaction and tries to modify the same record as proc1, triggering the deadlock.

create or replace procedure proc2 is
pragma autonomous_transaction;
begin
  delete from emp where empno = 1;  -- autonomous txn waits on the caller's row lock -> ORA-00060
  commit;
end;
/

create or replace procedure proc1 is
begin
  update emp set join_date = sysdate where empno = 1;
  proc2;  -- deadlocks: proc2's autonomous transaction needs the lock held here
end;
/

  3. When called via proc1, proc2 will immediately trigger the deadlock condition and Oracle reports the error as shown.

SQL>
SQL> begin
  2    -- Call the procedure
  3    proc1;
  4  end;
  5  /

begin
  -- Call the procedure
  proc1;
end;

ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "SCOTT.PROC2", line 4
ORA-06512: at "SCOTT.PROC1", line 4
ORA-06512: at line 3

SQL>


  4. But upon checking the Performance tab in EM Express, no Real-Time ADDM report was triggered. After repeatedly invoking the procedure, the current ADDM findings mentioned only that the DELETE statement in the procedure was consuming a lot of db time.
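As a cross-check outside EM Express, the Real-Time ADDM reports stored in AWR can be listed from DBA_HIST_REPORTS, where they are tagged with the component name 'rtaddm'; if no Real-Time ADDM analysis was triggered, no new rows should show up here. A minimal sketch:

SELECT report_id, generation_time
FROM   dba_hist_reports
WHERE  component_name = 'rtaddm'
ORDER  BY generation_time;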

Data Guard Physical Standby setup in 12c

Setting up a Physical Standby based Data Guard configuration using Oracle Database 12c.

In this post I will explore setting up a simple Data Guard configuration consisting of a 12c primary database protected by a physical standby. The configuration details are shown below.

Primary Host : laboms.example.com
Standby Host : labomsb.example.com
Primary Db   : Oracle 12c with a single pluggable database, PDB1
Standby Db   : Clone of the primary, maintained by managed recovery
OS           : Oracle Enterprise Linux, 64-bit

 
1. Configure a fast recovery area, if not already configured.

For example, run these commands as sys:

alter system set db_recovery_file_dest='/u01/app/oracle/oradata/fra' scope=spfile;
alter system set db_recovery_file_dest_size=10g scope=spfile;


2. Enable forced logging. Run this sql as sysdba.

Alter database force logging;

3. Enable archiving. (Run these commands as sys on the primary from SQL*Plus.)

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;


4. Configure the init.ora parameters listed below.

Run ALTER SYSTEM SET ... commands for the listed parameters from SQL*Plus as sysdba:

ALTER SYSTEM SET <param>=<value> scope=spfile;

(Replace <param> and <value> with actual values.)
Do only the primary for now; a sample set of commands for the primary follows the table.

Parameter                   Primary Value                      Standby Value
db_name                     CDB1                               CDB1
compatible                  12.1.0.2.0                         Same as primary
enable_pluggable_database   TRUE (already set in primary)      Same as primary
db_unique_name              CDB1                               CDB1_DR
log_archive_config          DG_CONFIG=(CDB1,CDB1_DR)           Same as primary
fal_server                  CDB1_DR                            CDB1
db_file_name_convert        'cdb1_dr','cdb1'                   'cdb1','cdb1_dr'
log_file_name_convert       'cdb1_dr','cdb1'                   'cdb1','cdb1_dr'
db_recovery_file_dest       /u01/app/oracle/oradata/fra        Same as primary
db_recovery_file_dest_size  10g                                Same as primary

log_archive_dest_1
  Primary: LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CDB1
  Standby: LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CDB1_DR

log_archive_dest_2
  Primary: SERVICE=CDB1_DR VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CDB1_DR
  Standby: SERVICE=CDB1 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CDB1
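For example, the primary-side commands might look like this sketch (values taken from the table; the ASYNC attribute on log_archive_dest_2 is an addition that matches the standby pfile shown later):

ALTER SYSTEM SET log_archive_config='DG_CONFIG=(CDB1,CDB1_DR)' SCOPE=spfile;
ALTER SYSTEM SET fal_server='CDB1_DR' SCOPE=spfile;
ALTER SYSTEM SET db_file_name_convert='cdb1_dr','cdb1' SCOPE=spfile;
ALTER SYSTEM SET log_file_name_convert='cdb1_dr','cdb1' SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CDB1' SCOPE=spfile;
ALTER SYSTEM SET log_archive_dest_2='SERVICE=CDB1_DR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CDB1_DR' SCOPE=spfile;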


5. Configure the local naming aliases and listeners on both sites.
It should be possible to connect through these aliases even when an instance is in nomount mode; this is required for the duplication of the database to the standby site.

Alias CDB1 (defined on both sites):
(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = laboms.example.com)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = CDB1)))

Alias CDB1_DR (defined on both sites):
(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = labomsb.example.com)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = CDB1_DR)))
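Connectivity to each alias can be sanity-checked with tnsping from either host:

$ tnsping cdb1
$ tnsping cdb1_dr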



6. On both sites, the listeners listen on port 1521.

The primary listener's static database service registration is shown in the screenshot below:




7. The standby listener's static database service registration is shown in the screenshot below:
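In text form, the standby's static registration entry in listener.ora might look like this sketch (the ORACLE_HOME path and the SID name are assumptions, not taken from the screenshots):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = CDB1_DR)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
      (SID_NAME = CDB1_DR)
    )
  )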




8. Configure standby redo logs on the primary.
Let's say the primary has 3 online redo log groups. Then, as per Oracle's best-practice recommendation, there should be 4 standby redo log groups (one more than the number of online groups). Use the alter database add standby logfile command to create the logs, keeping all log file group members the same size as the primary's. Run the commands as sysdba, adjusting the paths and sizes as appropriate:

alter database add standby logfile group 4 ('/u01/app/oracle/oradata/cdb1/sby4.ora') size 50m;
alter database add standby logfile group 5 ('/u01/app/oracle/oradata/cdb1/sby5.ora') size 50m;
alter database add standby logfile group 6 ('/u01/app/oracle/oradata/cdb1/sby6.ora') size 50m;
alter database add standby logfile group 7 ('/u01/app/oracle/oradata/cdb1/sby7.ora') size 50m;

Copy the Oracle password file from the primary to the standby. On Linux the password file is located in $ORACLE_HOME/dbs and is named orapwCDB1; copy it to the standby (renaming it there to orapwCDB1_DR, to match the standby instance SID).
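For example, with scp (the standby-side ORACLE_HOME path is an assumption):

$ cd $ORACLE_HOME/dbs
$ scp orapwCDB1 oracle@labomsb.example.com:/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwCDB1_DR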

9. Create a pfile for the standby instance containing these lines:

DB_NAME=CDB1
ENABLE_PLUGGABLE_DATABASE=TRUE
COMPATIBLE=12.1.0.2.0
DB_UNIQUE_NAME=CDB1_DR
LOG_ARCHIVE_CONFIG='DG_CONFIG=(CDB1,CDB1_DR)'
FAL_SERVER=CDB1
DB_RECOVERY_FILE_DEST='/u01/app/oracle/oradata/fra'
DB_RECOVERY_FILE_DEST_SIZE=10g
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=CDB1_DR'
LOG_ARCHIVE_DEST_2='SERVICE=CDB1 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CDB1'
STANDBY_FILE_MANAGEMENT=AUTO
SGA_TARGET=800M
PGA_AGGREGATE_TARGET=600M
LOG_FILE_NAME_CONVERT='cdb1','cdb1_dr'
DB_FILE_NAME_CONVERT='cdb1','cdb1_dr'

10. From SQL*Plus, start up the standby instance in nomount mode using this pfile.

For example, run this command at the OS prompt:

$ sqlplus sys/oracle@cdb1_dr as sysdba

And this at the resulting SQL prompt:

SQL> startup nomount pfile='pfile.ora'

11. Ensure that all the folders along the paths of the CDB datafiles are present on the standby.

If not present, create them. Below are the folders that were created on the standby before the duplication task was started.
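For example (a sketch; the exact directory list is an assumption for this environment, derived from the DB_FILE_NAME_CONVERT mapping and a primary datafile location of /u01/app/oracle/oradata/cdb1):

$ mkdir -p /u01/app/oracle/oradata/cdb1_dr/pdbseed
$ mkdir -p /u01/app/oracle/oradata/cdb1_dr/pdb1
$ mkdir -p /u01/app/oracle/oradata/fra
$ mkdir -p /u01/app/oracle/admin/cdb1_dr/adump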




12. Start up RMAN for the duplication.
From RMAN, connect to both the primary (as target) and the standby (as auxiliary) and duplicate the database. The RMAN output is collected into a file called rman.log using the tee utility:

$ rman target sys/oracle@cdb1 auxiliary sys/oracle@cdb1_dr | tee rman.log
RMAN> duplicate target database for standby from active database nofilenamecheck dorecover;


The output from the duplication is logged to rman.log… Partial results here…

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 12 14:58:01 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=842184548)
connected to auxiliary database: CDB1 (not mounted)

RMAN> 2>
Starting Duplicate Db at 12-MAR-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK
current log archived
starting media recovery

archived log for thread 1 with sequence 52 is already on disk as file /u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_52_bj2gmo93_.arc
archived log for thread 1 with sequence 53 is already on disk as file /u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_53_bj2gmplz_.arc
archived log for thread 1 with sequence 54 is already on disk as file /u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_54_bj2gmqj6_.arc
archived log file name=/u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_52_bj2gmo93_.arc thread=1 sequence=52
archived log file name=/u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_53_bj2gmplz_.arc thread=1 sequence=53
archived log file name=/u01/app/oracle/oradata/fra/CDB1_DR/archivelog/2015_03_12/o1_mf_1_54_bj2gmqj6_.arc thread=1 sequence=54
media recovery complete, elapsed time: 00:00:02
Finished recover at 12-MAR-15
Finished Duplicate Db at 12-MAR-15

RMAN>

Recovery Manager complete.


13. Connect to the standby database and start up managed recovery.
You can query v$managed_standby as shown to view the progress of the redo apply.


SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.

SQL> select client_process,process,status,sequence# from v$managed_standby;

CLIENT_P PROCESS   STATUS        SEQUENCE#
-------- --------- ------------ ----------
ARCH     ARCH      CONNECTED             0
ARCH     ARCH      CONNECTED             0
ARCH     ARCH      CONNECTED             0
ARCH     ARCH      CONNECTED             0
ARCH     RFS       IDLE                  0
LGWR     RFS       IDLE                 55
N/A      MRP0      APPLYING_LOG         55

7 rows selected.

The standby is receiving from the primary the log file with sequence# 55, which is also the log file being applied. We have completed the Data Guard setup.
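To further confirm that redo is shipping and being applied, one can switch a log on the primary and then check v$archived_log on the standby:

-- on the primary
SQL> alter system switch logfile;

-- on the standby
SQL> select sequence#, applied from v$archived_log order by sequence#;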

You can query the v$dataguard_config view for the configuration details.

SQL> select * from v$dataguard_config;

DB_UNIQUE_NAME       PARENT_DBUN                    DEST_ROLE         CURRENT_SCN     CON_ID
-------------------- ------------------------------ ----------------- ----------- ----------
CDB1                 NONE                           PRIMARY DATABASE      1266237          0
CDB1_DR              CDB1                           PHYSICAL STANDBY      1266338          0

SQL>


This completes the exercise. In a later post, I will try to configure Statspack on the standby (which I will open in Active Data Guard mode). Statspack can be used to tune the performance of queries that run on the standby in its Active Data Guard role.


Wednesday, 4 February 2015

New *_VC columns in Oracle database 12c dictionary views.



There are new columns named *_VC in some of the dictionary views in Oracle Database 12c. They are VARCHAR2 versions of columns that were previously available only as LONG.

This is useful in some situations, for example when you want to query the dictionary to list the names of the materialized views defined on a specific base table.

You cannot query based on the user_mviews.query column, which is of the LONG data type (LONG columns cannot be used in WHERE-clause expressions); it is not possible to use regular expressions or string comparisons to write a query like

select * from user_mviews v where CAST(v.QUERY AS CHAR) like '%EMP%';

You'd hit an error like this one.

SQL> select * from user_mviews v where CAST(v.QUERY AS CHAR) like '%EMP%';

select * from user_mviews v where CAST(v.QUERY AS CHAR) like '%EMP%'

ORA-00932: inconsistent datatypes: expected CHAR got LONG


As a workaround, it is possible to query the user_dependencies view, where there is a link between the mview and the depended-on table.
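For example, a sketch of such a dependency query, assuming the base table is named EMP:

select name
from   user_dependencies
where  type = 'MATERIALIZED VIEW'
and    referenced_type = 'TABLE'
and    referenced_name = 'EMP';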


But in Oracle Database 12c, some dictionary views, like *_VIEWS and *_CONSTRAINTS, have an additional VARCHAR2 column storing the same content as the pre-existing LONG column. The LONG columns are the ones used to store queries or query fragments.
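So a search like the failed one above can now be written against the VARCHAR2 twin. A sketch using the TEXT_VC column of USER_VIEWS (the '%EMP%' pattern is just illustrative):

select view_name
from   user_views
where  text_vc like '%EMP%';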


For example, consider the simple employee table below, which, by the way, has a constraint that start_date should be less than end_date:

SQL> select * from emp where empno=1;

     EMPNO NAME                 START_DATE  END_DATE
---------- -------------------- ----------- -----------
         1 emp1                 01-Jan-14   31-Jan-14


Well, I queried the new 12c column search_condition_vc on user_constraints, which sits alongside the pre-existing search_condition LONG column, and got this output:

SQL> ed
Wrote file afiedt.buf

  1  select table_name, search_condition, search_condition_vc from user_constraints
  2* where table_name='EMP'
SQL> /

TABLE_NAME      SEARCH_CONDITION               SEARCH_CONDITION_VC
--------------- ------------------------------ ------------------------------
EMP             start_date < end_date          start_date < end_date




Note the 2nd and 3rd columns:

search_condition    : the pre-existing LONG column (start_date < end_date)
search_condition_vc : the new VARCHAR2 equivalent of the LONG (start_date < end_date)

Some caveats still apply, though: the *_VC columns are VARCHAR2(4000), so only the first 4000 bytes of longer SQL text are captured there.

Wednesday, 7 January 2015

Managing containers in EM express



New container management capability in EM Express 12.1.0.2


EM Express in 12.1.0.1 offers a lightweight GUI for managing and administering the Oracle Database instance out of the box. You could not, however, administer the containers of a CDB with it.



But in EM Express 12.1.0.2, you can: it now allows administration of the containers. There is a clickable link on the home page that takes the user to the Containers page.




The Containers page looks like this. It shows the two pluggable databases, PDB1 and PDB2, and it can be seen that PDB1 is down whereas PDB2 is running.



A couple of useful administrative features are available: the ability to create, drop, plug in, and unplug pluggable databases. These will be very handy for a DBA.



You can select a pluggable database from the grid and access its Performance Hub page, as shown below. The Performance Hub lets you monitor database performance, generate and view performance reports, identify issues, and tune high-load SQL.




Another interesting thing is that this CDB is using a resource plan, which can also be administered from within EM Express.









CDB resource plans are two-tier plans; the Oracle Resource Manager documentation describes how they work.


At the first tier, which is at the CDB level, the administrator specifies, for each PDB, the proportion of server resources (basically CPU and parallel servers) it will be allowed to use, by specifying a relative number known as a share. So if there are 2 PDBs, with PDB1 getting 2 shares and PDB2 getting 4, then the resources are shared in a 2:4 (i.e. 1:2) ratio.

A maximum utilization cap can be specified, so that no PDB ever uses up all the available CPU or parallel query slaves. See the screenshot for reference:
 
Further, it is possible to establish a default plan directive at the CDB level, which will apply to newly created PDBs for which directives are not explicitly set at creation time (see above).
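Outside the GUI, the same kind of plan can be built with the DBMS_RESOURCE_MANAGER package. A minimal sketch, using a hypothetical plan name demo_cdb_plan and the share values discussed below (2 for PDB1, 3 for PDB2):

begin
  dbms_resource_manager.create_pending_area();

  -- hypothetical plan name; share values mirror the screenshots
  dbms_resource_manager.create_cdb_plan(
    plan    => 'demo_cdb_plan',
    comment => 'two-tier CDB resource plan sketch');

  dbms_resource_manager.create_cdb_plan_directive(
    plan                  => 'demo_cdb_plan',
    pluggable_database    => 'PDB1',
    shares                => 2,      -- PDB1: 2 shares
    utilization_limit     => 100,    -- may use all available CPU
    parallel_server_limit => 100);   -- may use all parallel slaves

  dbms_resource_manager.create_cdb_plan_directive(
    plan                  => 'demo_cdb_plan',
    pluggable_database    => 'PDB2',
    shares                => 3,      -- PDB2: 3 shares
    utilization_limit     => 100,
    parallel_server_limit => 100);

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

-- activate the plan from the root container
alter system set resource_manager_plan = 'demo_cdb_plan';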

At the second tier, which is now at the PDB level, resources can be allocated by using the 11g style resource consumer groups, resource plan directives, etc.  

In the screen grab below (part of the Containers screen again), it can be seen that the pluggable database PDB2 gets 3 shares while PDB1 gets only 2. This is depicted graphically under the CPU Resource Limits column.

In this case, both PDBs are allowed to use all the available CPU and all the available parallel slaves, if they require them.

With the container management capability in the new 12.1.0.2 release, EM Express has undergone significant improvements that will be useful to a DBA, who will now need to depend less on Cloud Control.