
Monday, September 5, 2011

Automatic Workload Repository (AWR) 11gR2

On the OTN forums I used to see a lot of threads asking basic AWR questions, like "how do I generate it, how do I analyze it" and so on.
So it is worth writing about AWR today.

FAQ's
1. What is an AWR report?
AWR = Automatic Workload Repository, introduced in 10gR1 to help tune the Oracle database. It is an advanced version of the old snapshot (STATSPACK) method.
Automatic Workload Repository (AWR) is a repository of historical performance data that includes cumulative statistics for the system, sessions, individual SQL statements, segments, and services. These statistics are the foundation of performance tuning. By automating the gathering of database statistics for problem detection and tuning, AWR serves as the foundation for database self-management.


2. How to use or generate AWR reports?
AWR snapshots are taken automatically by the Oracle database every 1 hour and are retained in the database for 8 days by default.

If you want to generate an AWR report manually, use the following steps.
A few scripts need to be run at the SQL prompt to generate an AWR report manually;
all of these scripts are located under the $ORACLE_HOME/rdbms/admin directory.


SQL> conn system/oracle@db01



Connected.


SQL> --to generate an AWR report, the awrrpt.sql script needs to be run.


SQL> @D:\app\Administrator\11.2.0\db\RDBMS\ADMIN\awrrpt.sql


If the database is RAC, then run the following script:
SQL>@%ORACLE_HOME%/rdbms/admin/awrgrpt.sql

If you have multiple instances configured on the same host and want to generate an AWR report for a specific instance, then run the following script:
SQL>@%ORACLE_HOME%/rdbms/admin/awrrpti.sql

Generating an AWR report for a specific instance of a RAC database:
SQL>@%ORACLE_HOME%/rdbms/admin/awrgrpti.sql


Generating an AWR Compare Periods report for a RAC database:
SQL>@%ORACLE_HOME%/rdbms/admin/awrgdrpt.sql


Generating an AWR Compare Periods report for a specific instance:
SQL>@%ORACLE_HOME%/rdbms/admin/awrddrpi.sql

Generating an AWR Compare Periods report for a specific instance on RAC:

SQL>@%ORACLE_HOME%/rdbms/admin/awrgdrpi.sql


Generating AWR report for an SQL statement
SQL>@%ORACLE_HOME%/rdbms/admin/awrsqrpt.sql

Generating AWR report for an SQL statement for a specific instance
SQL>@%ORACLE_HOME%/rdbms/admin/awrsqrpi.sql

NOTE: you can get the SQL_ID from the V$SQL database view.
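For example (the SQL_TEXT filter below is only an illustrative placeholder; adjust it to the statement you are looking for), the SQL_ID can be looked up like this:

SQL> SELECT sql_id, executions, elapsed_time, sql_text
  2  FROM v$sql
  3  WHERE sql_text LIKE 'select * from emp%';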



Generating an AWR report to compare two periods
We can compare reports from two different periods to analyze a database bottleneck.
The script that needs to be run is called awrddrpt.sql.

SQL>@%ORACLE_HOME%/rdbms/admin/awrddrpt.sql


Generating an AWR report to compare two periods on RAC
SQL>@%ORACLE_HOME%/rdbms/admin/awrgdrpt.sql


List of all AWR scripts.
D:\app\Administrator\11.2.0\db\RDBMS\ADMIN>dir awr*.sql

Volume in drive D is OraHome
Volume Serial Number is 50E3-4204

Directory of D:\app\Administrator\11.2.0\db\RDBMS\ADMIN

01-12-2006 07:31 PM 1,189 awrblmig.sql
This script will migrate the baseline data on a pre-11g database to the 11g database.

23-05-2005 08:13 PM 21,618 awrddinp.sql

13-05-2009 07:08 PM 7,600 awrddrpi.sql
This script compares two periods for a specific instance.

27-05-2005 08:22 PM 2,069 awrddrpt.sql
This script will compare period reports.

24-03-2009 10:38 AM 11,490 awrextr.sql
This script extracts AWR data into a Data Pump export file. For example, if you want to export AWR data from PROD and import it into a TEST database, you can use this script (you need to supply a Data Pump directory, and it works only with the Data Pump utilities).

13-03-2008 04:01 PM 17,051 awrgdinp.sql
13-05-2009 07:08 PM 7,551 awrgdrpi.sql
This script compares two periods for a specific instance of a RAC database.

29-04-2009 05:53 PM 1,958 awrgdrpt.sql
This script compares two periods for a RAC database.

13-03-2008 04:01 PM 7,719 awrginp.sql
29-04-2009 05:53 PM 1,578 awrgrpt.sql
This script generates an AWR report for a RAC database.

13-03-2008 04:01 PM 6,526 awrgrpti.sql
This script generates an AWR report for a specific instance of a RAC database.

01-09-2004 06:38 PM 50,507 awrinfo.sql
This is one of the most important scripts; a DBA should run it frequently and check the following.
This script will give us the information below:
1. SYSAUX space occupied by AWR data in the tablespace
2. SYSAUX occupants
3. SYSAUX usage (unregistered schemas)
4. Estimated size of AWR snapshots
5. Space usage by AWR components
6. And much more info... just run it and read the output.
This script must be run as a SYSDBA user.

05-01-2005 03:25 PM 2,542 awrinpnm.sql
03-03-2006 05:47 PM 8,901 awrinput.sql
15-07-2009 10:20 AM 10,736 awrload.sql
This script LOADS AWR data from an exported dump file into the target system. Once you execute the script, it creates an AWR staging schema to load the AWR data; after loading, the data is transferred to the SYS schema. (A Data Pump directory is required.)

24-10-2003 02:20 PM 2,069 awrrpt.sql
This script generates an AWR report.

18-04-2005 01:00 PM 7,801 awrrpti.sql
This script generates an AWR report for a specific instance.

05-01-2005 03:25 PM 6,919 awrsqrpi.sql
This script generates an AWR report for a SQL statement on a specific instance.

05-01-2005 03:25 PM 1,528 awrsqrpt.sql
This script generates an AWR report for a SQL statement.

By default the system creates a snapshot every hour, but if you want to create a snapshot at a specific time then use the following procedure.

To create snapshot manually

SQL> conn sys/oracle@db01 as sysdba

Connected.


SQL> begin
2 DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
3 end;
4 /
PL/SQL procedure successfully completed.

If you want to drop a range of snapshots manually:
BEGIN

DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 22,
High_snap_id => 32, dbid => 3310949047);
END;
/

To modify the default snapshot retention and interval settings (both values are given in minutes):
BEGIN

DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retention => 43200,
interval => 30, topnsql => 100, dbid => 3310949047);
END;
/
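
In the call above, retention => 43200 minutes equals 30 days and interval => 30 means a snapshot every 30 minutes. To verify the current settings you can query the DBA_HIST_WR_CONTROL view, for example:

SQL> SELECT dbid, snap_interval, retention, topnsql
  2  FROM dba_hist_wr_control;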



In case of any doubt, write me an email at askdbataj


Thursday, August 4, 2011

How to resolve Deadlock ORA-00060 error

Deadlock.
What is deadlock?
A deadlock is a situation in which two or more users are waiting for data locked by each other. Deadlocks prevent some transactions from continuing to work.

How to resolve a deadlock?
We have to do nothing to resolve the deadlock error (ORA-00060); Oracle automatically detects and resolves the deadlock by rolling back one of the statements involved in the deadlock, releasing one set of conflicting row locks.

Why does the deadlock error occur?
A deadlock is almost always an application bug. Ask the DEVELOPER to look into the code of the particular form (query) which raises the DEADLOCK.

What is the reason for a deadlock?
Deadlocks most often occur when transactions explicitly override the default locking of the Oracle database. Because the Oracle database does not escalate locks, does not use read locks for queries, and does use row-level locking, deadlocks occur infrequently.
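
As a minimal sketch (assuming the demo SCOTT.EMP table), two sessions that update the same rows in opposite order will produce ORA-00060 in one of them:

-- Session 1:
SQL> UPDATE emp SET sal = sal + 100 WHERE empno = 7369;   -- locks row 7369
-- Session 2:
SQL> UPDATE emp SET sal = sal + 100 WHERE empno = 7499;   -- locks row 7499
-- Session 1:
SQL> UPDATE emp SET sal = sal + 100 WHERE empno = 7499;   -- waits for Session 2
-- Session 2:
SQL> UPDATE emp SET sal = sal + 100 WHERE empno = 7369;   -- deadlock; Oracle rolls back
                                                          -- this statement in one session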

Reference: asktom

Tuesday, July 26, 2011

Load Profile



1. Redo size: The amount of redo generated during this report.

2. Logical Reads: Calculated as Consistent Gets + DB Block Gets = Logical Reads.

3. Block changes: The number of blocks modified during the sample interval.

4. Physical Reads: The number of requests for a block that caused a physical I/O operation.

5. Physical Writes: Number of physical writes performed.

6. User Calls: Number of user calls generated.

7. Parses: The total of all parses, both hard and soft.

8. Hard Parses: The parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.

9. Soft Parses: Soft parses are not listed but are derived by subtracting the hard parses from the total parses. A soft parse reuses a previous hard parse; hence it consumes far fewer resources.

10. Sorts, Logons, Executes and Transactions: All self-explanatory.


The following information is also available in the workload section:




1. % Blocks changed per Read:

The % Blocks changed per Read statistic indicates what percentage of the blocks read were retrieved for update (that is, subsequently changed); a sample query is sketched after this list.
% Blocks changed per Read = (Block Changes / Logical Reads) * 100

2. Recursive Call %:

Sometimes, in order to execute a SQL statement issued by a user, Oracle must issue additional statements. Such statements are called recursive calls or recursive SQL statements. For example, if you insert a row into a table that does not have enough space to hold that row, then Oracle makes recursive calls to allocate the space dynamically. Recursive calls are also generated when data dictionary information is not available in the data dictionary cache and must be retrieved from disk.

3. Rollback per transaction %: The percentage of transactions that were rolled back, derived as user rollbacks / (user commits + user rollbacks) * 100.

4. Rows per Sort: The average number of rows processed per sort operation (sorted rows / total sorts).
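
A minimal sketch of how "% Blocks changed per Read" can be computed outside the report, using the cumulative instance-wide counters in V$SYSSTAT (AWR itself uses the delta between two snapshots):

SQL> SELECT ROUND(chg.value / (cg.value + dbg.value) * 100, 2) AS pct_blocks_changed_per_read
  2  FROM v$sysstat chg, v$sysstat cg, v$sysstat dbg
  3  WHERE chg.name = 'db block changes'
  4  AND cg.name = 'consistent gets'
  5  AND dbg.name = 'db block gets';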


Friday, July 17, 2009

Checkpoint Not Complete In Alert.log


In our production database we are getting a lot of messages like the ones below:
/db/app/oracle/Db_1/oradata/orcl/redo04.log
Thu Jul 16 22:02:07 2009
Thread 1 cannot allocate new log, sequence 66271
Checkpoint not complete
Current log# 4 seq# 66270 mem# 0:

When I searched on the web/Metalink, I found the workaround below ...

"archive_lag_target" should be disabled to stop the above message appearing in the alert log file:

"alter system set archive_lag_target=0 scope=both;"

check metalink note: 435780.1
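
Since this message means a log switch was requested before the checkpoint for an earlier log had completed, it is also worth checking the size and number of the online redo log groups; a quick diagnostic sketch (not the workaround itself):

SQL> SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
  2  FROM v$log;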

Saturday, May 31, 2008

Trace Event

How to Determine What Events/Parameters are Set in the Current Session?

Check Metalink Note: 28446.1

How To List All The Named Events Set For A Database?

Check Metalink Note: 436036.1

Wednesday, November 7, 2007

Advanced Replication [Part-II Master Replication]

Part-II Master Replication

After reading and understanding Part-I, let us now see how to set up a multimaster replication environment.

What is Master Replication?
Oracle has two types of master replication: single master replication and multimaster replication.
Note: Here I explain only Multimaster replication.

1. Multimaster replication includes multiple master sites, where each master site operates as an equal peer.
2. Multimaster replication, also known as peer-to-peer or n-way replication, is comprised of multiple master sites equally participating in an update-anywhere model.
3. Oracle database servers operating as master sites in a multimaster replication environment automatically work to converge the data of all table replicas, and ensure global transaction consistency and data integrity.
4. Conflict resolution is independently handled at each of the master sites.
5. Multimaster replication provides complete replicas of each replicated table at each of the master sites.



Why Use Multimaster Replication?
From a very basic point of view, replication is used to make sure that data is available when and where you need it.
Some of the situations in which we use multimaster replication are:
Failover
Multimaster replication can be used to protect the availability of a mission critical database. For example, a multimaster replication environment can replicate data in your database to establish a failover site should the primary site become unavailable due to system or network outages. Such a failover site also can serve as a fully functional database to support application access when the primary site is concurrently operational.
Load Balancing
Multimaster replication is useful for transaction processing applications that require multiple points of access to database information for the following purposes:
1. Distributing a heavy application load
2. Ensuring continuous availability
3. Providing more localized data access

Multimaster Replication Process
There are two types of multimaster replication: asynchronous and synchronous.
asynchronous
Asynchronous replication, often referred to as store-and-forward replication, captures any local changes, stores them in a queue, and, at regular intervals, propagates and applies these changes at remote sites. With this form of replication, there is a period of time before all sites achieve data convergence.
synchronous
Synchronous replication, also known as real-time replication, applies any changes or executes any replicated procedures at all sites participating in the replication environment as part of a single transaction. If the data manipulation language (DML) statement or procedure fails at any site, then the entire transaction rolls back. Synchronous replication ensures data consistency at all sites in real-time.

Note: You can change the propagation mode from asynchronous to synchronous or vice versa for a master site.

How to set up the master sites?

Set the following parameters:
global_names
alter system set global_names=TRUE;

job_queue_processes
alter system set job_queue_processes = 1;

Considerations for Replicated Tables
Primary Keys
If possible, each replicated table should have a primary key. Where a primary key is not possible, each replicated table must have a set of columns that can be used as a unique identifier for each row of the table.
Foreign Keys
When replicating tables with foreign key referential constraints, Oracle Corporation recommends that you always index foreign key columns and replicate these indexes, unless no updates and deletes are allowed in the parent table. Indexes are not replicated automatically. To replicate an index, add it to the master group containing its table using either the Replication Management tool or the CREATE_MASTER_REPOBJECT procedure in the DBMS_REPCAT package.

Create databases for replication

Here I created two databases for replication:
db01
db02

Setting up DB01 master site

Step 1 Connect as SYSTEM at a master site at DB01
SQL> conn system/oracle@db01
Connected.

Step 2 Create the replication administrator at DB01
SQL> CREATE USER repadmin IDENTIFIED BY repadmin;

User created.

Step 3 Grant privileges to the replication administrator at DB01
SQL> BEGIN
2 DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA (
3 username => 'repadmin');
4 END;
5 /

PL/SQL procedure successfully completed.

Note: This privilege allows repadmin to create and manage a replicated environment.

SQL> GRANT SELECT ANY DICTIONARY TO repadmin;

Grant succeeded.

Note: If you want your repadmin to be able to connect to the Replication Management tool, then grant SELECT ANY DICTIONARY to repadmin.

Step 4 Register the propagator at DB01
SQL> BEGIN
2 DBMS_DEFER_SYS.REGISTER_PROPAGATOR (
3 username => 'repadmin');
4 END;
5 /

PL/SQL procedure successfully completed.

Note: The propagator is responsible for propagating the deferred transaction queue to other master sites.

Step 5 Register the receiver at DB01
SQL> BEGIN
2 DBMS_REPCAT_ADMIN.REGISTER_USER_REPGROUP (
3 username => 'repadmin',
4 privilege_type => 'receiver',
5 list_of_gnames => NULL);
6 END;
7 /

PL/SQL procedure successfully completed.

Note: The receiver receives the propagated deferred transactions sent by the propagator from other master sites.
Step 6 Schedule purge at master site at DB01
SQL> CONNECT repadmin/repadmin@db01
Connected.
SQL>
SQL> BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PURGE (
3 next_date => SYSDATE,
4 interval => 'SYSDATE + 1/24',
5 delay_seconds => 0);
6 END;
7 /

PL/SQL procedure successfully completed.

Note: In order to keep the size of the deferred transaction queue in check, you should purge successfully completed deferred transactions. The SCHEDULE_PURGE procedure automates the purge process for you. You must execute this procedure as the replication administrator.
Note: Repeat the above 6 steps at the DB02 master site.

Step-2 Create Scheduled Links between the Master Sites

Step 1 Create database links between master sites.
The database links provide the necessary distributed mechanisms to allow the different replication sites to replicate data among themselves. Before you create any private database links, you must create the public database links that each private database link will use. You then must create a database link between all replication administrators at each of the master sites that you have set up.
SQL> conn system/oracle@db01
Connected.
SQL> create public database link db02 using 'db02';

Database link created.

SQL> conn repadmin/repadmin@db01
Connected.
SQL> create database link db02 connect to repadmin identified by repadmin;

Database link created.

SQL> conn system/oracle@db02
Connected.
SQL> create public database link db01 using 'db01';

Database link created.

SQL> conn repadmin/repadmin@db02
Connected.
SQL> create database link db01 connect to repadmin identified by repadmin;

Database link created.

Step 2 Define a schedule for each database link to create scheduled links.
Create a scheduled link by defining a database link when you execute the SCHEDULE_PUSH procedure. The scheduled link determines how often your deferred transaction queue is propagated to each of the other master sites. You need to execute the SCHEDULE_PUSH procedure for each database link that you created in Step 1. The database link is specified in the destination parameter of the SCHEDULE_PUSH procedure.
Even when using Oracle's asynchronous replication mechanisms, you can configure a scheduled link to simulate continuous, real-time replication. The scheduled links in this example simulate continuous replication.
SQL> conn repadmin/repadmin@db01
Connected.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PUSH (
3 destination => 'DB02',
4 interval => 'SYSDATE + (1/144)',
5 next_date => SYSDATE,
6 parallelism => 1,
7 execution_seconds => 1500,
8 delay_seconds => 1200);
9* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> conn repadmin/repadmin@db02
Connected.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PUSH (
3 destination => 'DB01',
4 interval => 'SYSDATE + (1/144)',
5 next_date => SYSDATE,
6 parallelism => 1,
7 execution_seconds => 1500,
8 delay_seconds => 1200);
9* END;
SQL> /

PL/SQL procedure successfully completed.


Step-3 Create Master Group

Here I am using the SCOTT schema and its objects for replication purposes because the SCOTT schema exists in a default Oracle installation.
Objects used for replication in the scott schema:
1. DEPT - table
2. EMP - table
A primary key must exist on each table used for replication.
Note: If the DB version is 10g, then first unlock the scott account.

Step-1 Unlock the SCOTT schema and reset the schema password, or create a new schema for replication.
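For example (the password shown is just the traditional demo password; use your own):

SQL> ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;

User altered.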
Step 2 Create the master group

SQL> conn repadmin/repadmin@db01
Connected.
SQL> begin
2 dbms_repcat.create_master_repgroup (
3 gname => 'scott_repg');
4 end;
5 /

PL/SQL procedure successfully completed.
Note: This step must be completed by the replication administrator.

Step 3 Add objects to master group.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
3 gname => 'scott_repg',
4 type => 'TABLE',
5 oname => 'EMP',
6 sname => 'scott',
7 use_existing_object => TRUE,
8 copy_rows => FALSE);
9* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
3 gname => 'scott_repg',
4 type => 'TABLE',
5 oname => 'DEPT',
6 sname => 'scott',
7 use_existing_object => TRUE,
8 copy_rows => FALSE);
9* END;
SQL> /

PL/SQL procedure successfully completed.

Note: Use the CREATE_MASTER_REPOBJECT procedure to add an object to your master group. In most cases, you probably will be adding tables and indexes to your master group, but you can also add procedures, views, synonyms, and so on.
Step 4 Add additional master sites.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.ADD_MASTER_DATABASE (
3 gname => 'scott_repg',
4 master => 'DB02',
5 use_existing_objects => TRUE,
6 copy_rows => FALSE,
7 propagation_mode => 'ASYNCHRONOUS');
8* END;
SQL> /

PL/SQL procedure successfully completed.
Note: You should wait until db02 appears in the DBA_REPSITES view before continuing. Execute the following SELECT statement in another SQL*Plus session to make sure that db02 has appeared:
SQL> conn repadmin/repadmin@db01
Connected.
SQL> SELECT DBLINK FROM DBA_REPSITES WHERE GNAME = 'SCOTT_REPG';
DBLINK
------------------------------------------------------------------
DB01
DB02
Note:
1. In this example, the use_existing_objects parameter in the ADD_MASTER_DATABASE procedure is set to TRUE because it is assumed that the scott schema already exists at all master sites. In other words, it is assumed that the objects in the scott schema are precreated at all master sites. Also, the copy_rows parameter is set to FALSE because it is assumed that the identical data is stored in the tables at each master site.
2. The replication process used here is "ASYNCHRONOUS".
3. "Conflict resolution methods" are skipped here.
Note: Repeat Step-3 at the DB02 master site.
Step 5 Generate replication support.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'emp',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'dept',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
SQL> /

PL/SQL procedure successfully completed.
Note: You should wait until the DBA_REPCATLOG view is empty before resuming master activity. Execute the following SELECT statement to monitor your DBA_REPCATLOG view:

SQL> SELECT COUNT(*) FROM DBA_REPCATLOG WHERE GNAME = 'SCOTT_REPG';

COUNT(*)
----------
0
Step-6 Start replication.
After creating your master group, adding replication objects, generating replication support, and adding additional master databases, you need to start replication activity. Use the RESUME_MASTER_ACTIVITY procedure to "turn on" replication for the specified master group.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.RESUME_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
SQL> /
PL/SQL procedure successfully completed.
Here we complete the multimaster replication procedure.
Now let's check whether what we did is correct or not :)
Connect as scott to db01, insert a new row into the dept table, and commit.
SQL> conn scott/tiger@db01
Connected.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

SQL> insert into dept values (50,'DUMMY','DUMMY');

1 row created.
SQL> commit;
Commit complete.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 DUMMY DUMMY--- New row

--Wait some time ... then connect as scott to the db02 database and check that the new row has replicated into the dept table.
SQL> conn scott/tiger@db02
Connected.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 DUMMY DUMMY --- New row
Note: you can perform any DML operation at the master definition site "db01" and commit; the changes will be applied at the DB02 site. Likewise, if you make any changes in the DB02 database, the changes will be applied in DB01, and vice versa.
SQL> conn scott/tiger@db02
Connected.
SQL> select count(*) from dept;

COUNT(*)
----------
5
--Delete one row from the dept table at "db02".
SQL> delete dept where dname = 'DUMMY';

1 row deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from dept;

COUNT(*)
----------
4

SQL> disconn
SQL> conn scott/tiger@db01
Connected.
SQL> select count(*) from dept;

COUNT(*)
----------
4


Altering a Replicated Object



Step 1 Connect to the master definition site as the replication administrator.

SQL> conn repadmin/repadmin@db01
Connected.
Step 2 If necessary, then quiesce the master group.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
5 /

PL/SQL procedure successfully completed.
Step 3 In a separate SQL*Plus session, check the status of the master group you are quiescing.
Do not proceed until the group's status is QUIESCED.
SQL> SELECT GNAME, STATUS FROM DBA_REPGROUP;

GNAME STATUS
------------------------------ ---------
SCOTT_REPG QUIESCING
Step 4 Alter the replicated object.
SQL> ED
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.ALTER_MASTER_REPOBJECT (
3 sname => 'SCOTT',
4 oname => 'dept',
5 type => 'TABLE',
6 ddl_text => 'ALTER TABLE SCOTT.DEPT ADD (timestamp DATE)');
7* END;
SQL> /

PL/SQL procedure successfully completed.
Note: This adds a TIMESTAMP column to the scott.dept table at db01.
Step 5 Regenerate replication support for the altered object.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'dept',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
8 /

PL/SQL procedure successfully completed.
Step 6 In a separate SQL*Plus session, check if DBA_REPCATLOG is empty.
SQL> select count(*) from dba_repcatlog;

COUNT(*)
----------
0
Note: Do not proceed until this view is empty.
Step 7 Resume replication activity.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.RESUME_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
SQL> /

PL/SQL procedure successfully completed.
Now check the dept table in both databases, "db01" and "db02".
SQL> conn scott/tiger@db01
Connected.
SQL> desc dept
Name Null? Type
----------------------------------------- -------- -------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
TIMESTAMP DATE

SQL> conn scott/tiger@db02
Connected.
SQL> desc dept
Name Null? Type
----------------------------------------- -------- -------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
TIMESTAMP DATE


Listing the Master Sites Participating in a Master Group


Here I created two databases: one is the master definition site and the other is a master site.
Through the query below we can find out which one is the master definition site and which is the master site.
SQL> conn repadmin/repadmin@db01
Connected.
SQL> COLUMN GNAME HEADING 'Master Group' FORMAT A20
SQL> COLUMN DBLINK HEADING 'Sites' FORMAT A25
SQL> COLUMN MASTERDEF HEADING 'Master|Definition|Site?' FORMAT A10
SQL> SELECT GNAME, DBLINK, MASTERDEF
2 FROM DBA_REPSITES
3 WHERE MASTER = 'Y'
4 AND GNAME NOT IN (SELECT GNAME FROM DBA_REPSITES WHERE SNAPMASTER = 'Y'
)
5 ORDER BY GNAME;
Master
Definition
Master Group Sites Site?
-------------------- ------------------------- ----------
SCOTT_REPG DB02 N
SCOTT_REPG DB01 Y
Note: DB02 is master site and DB01 is master definition site.

Monday, November 5, 2007

Advanced Replication [Part-I Introduction]

Dear Friends,

We can divide Advanced Replication into three parts for better understanding.

1. Part-I Introduction
2. Part-II Multimaster Replication
3. Part-III Materialized view replication

Part-I Introduction

What is Advanced Replication?
Replication is the process of copying and maintaining database objects, such as tables, in multiple databases that make up a distributed database system. Changes applied at one site are captured and stored locally before being forwarded and applied at each of the remote locations. Advanced Replication is a fully integrated feature of the Oracle server; it is not a separate server.

Replication uses distributed database technology to share data between multiple sites, but a replicated database and a distributed database are not the same. In a distributed database, data is available at many locations, but a particular table resides at only one location. For example, the employees table resides at only the ny.world database in a distributed database system that also includes the hk.world and la.world databases. Replication means that the same data is available at multiple locations. For example, the employees table is available at ny.world, hk.world, and la.world.

Some of the common reasons for using replication are:

Availability
Replication improves the availability of applications because it provides them with alternative data access options. If one site becomes unavailable, then users can continue to query or even update the remaining locations. In other words, replication provides excellent failover protection.
Performance
Replication provides fast, local access to shared data because it balances activity over multiple sites. Some users can access one server while other users access different servers, thereby reducing the load at all servers. Also, users can access data from the replication site that has the lowest access cost, which is typically the site that is geographically closest to them.
Disconnected computing
A materialized view is a complete or partial copy (replica) of a target table from a single point in time. Materialized views enable users to work on a subset of a database while disconnected from the central database server. Later, when a connection is established, users can synchronize (refresh) materialized views on demand. When users refresh materialized views, they update the central database with all of their changes, and they receive any changes that may have happened while they were disconnected.
Network load reduction
Replication can be used to distribute data over multiple regional locations. Then, applications can access various regional servers instead of accessing one central server. This configuration can reduce network load dramatically.


Architecture of Advanced Replication


There are two types of Advanced Replication:

1. Multimaster Replication
2. Materialized view replication

1. Multimaster replication
Start --- > set up master site --- > Select object for replication --- > Create Group for selected objects --- > if conflict possible then configure conflict resolution ---> end

2. Materialized view replication
Start --- > set up master site -- > set up materialized view site --- > create materialized view group --- > end

Note: Detailed explanations follow in Part-II and Part-III.

Replication Sites
A replication group can exist at multiple replication sites. Replication environments support two basic types of sites: master sites and materialized view sites. One site can be both a master site for one replication group and a materialized view site for a different replication group. However, one site cannot be both the master site and the materialized view site for the same replication group.

The differences between master sites and materialized view sites are the following:
Replication Group
1. A replication group at a master site is more specifically referred to as a master group.
2. A replication group at a materialized view site is based on a master group and is more specifically referred to as a materialized view group.

Replication Objects
1. A master site maintains a complete copy of all objects in a replication group,
Example: if the hr_repg master group contains the tables employees and departments, then all of the master sites participating in the master group must maintain a complete copy of employees and departments.

2. Materialized views at a materialized view site can contain all or a subset of the table data within a master group.
Example: one materialized view site might contain only a materialized view of the employees table, while another materialized view site might contain materialized views of both the employees and departments tables.

Communication with master sites
1. All master sites in a multimaster replication environment communicate directly with one another to continually propagate data changes in the replication group
2. Materialized view sites contain an image, or materialized view, of the table data from a certain point in time.

Replication Objects
A replication object is a database object existing on multiple servers in a distributed database system. In a replication environment, any updates made to a replication object at one site are applied to the copies at all other sites. Advanced Replication enables you to replicate the following types of objects:
• Tables
• Indexes
• Views and Object Views
• Packages and Package Bodies
• Procedures and Functions
• User-Defined Types and Type Bodies
• Triggers
• Synonyms
• Index types
• User-Defined Operators

Replication Groups
In a replication environment, Oracle manages replication objects using replication groups. A replication group is a collection of replication objects that are logically related.
By organizing related database objects within a replication group, it is easier to administer many objects together. Typically, you create and use a replication group to organize the schema objects necessary to support a particular database application. However, replication groups and schemas do not need to correspond with one another. A replication group can contain objects from multiple schemas, and a single schema can have objects in multiple replication groups. However, each replication object can be a member of only one replication group.


Part-II

Thursday, November 1, 2007

Materialized view for Data Warehouse

Dear friends,

Have you heard about materialized views?
Ans: Yes, No, or Maybe.

Now I will explain them in detail.

What is a materialized view?

1. A materialized view is a normal database object, like a table or an index.
2. It is basically used for data warehouse or replication purposes.
3. Snapshot is a synonym for materialized view.
4. A materialized view can be stored in the same database as its base tables or in a different database.
5. A materialized view provides access to table data by storing the results of a query in a separate schema object. Unlike an ordinary view, which does not take up any storage space or contain any data, a materialized view contains the rows resulting from a query against one or more base tables or views.
6. A materialized view improves response time through query rewrite and reduces execution time.

Note: Query rewrite is a mechanism by which Oracle, transparently to the end user or application, improves query response time by automatically rewriting the SQL query to use the materialized view instead of accessing the original tables. Query rewrite is particularly useful in a data warehouse environment.

What is the syntax for a materialized view?

Create materialized view "materialized_view_name"
Build [immediate | deferred]
Refresh [fast] [on commit | on demand]
Enable query rewrite
As
SQL statement
/

Now a detailed explanation of the syntax
1. Create materialized view "name"
I hope this is understood because it is self-explanatory.

2. Build [immediate | deferred]
1. Build immediate -- > the materialized view is populated with data from the base tables at creation time.
2. Build deferred --- > the materialized view is not populated with data at creation time; we have to populate it later.

Eg:
With build immediate



create materialized view mymview1
build immediate
as
select ename,sum(sal) from emp group by ename;

SQL> select count(*) from mymview1;

COUNT(*)
----------
14

Note: the materialized view is populated with data at creation time.

With build deferred
create materialized view mymview2
build deferred
as
select ename,sum(sal) from emp group by ename;

SQL> select count(*) from mymview2;

COUNT(*)
----------
0

Note: the materialized view is not populated with data at creation time; we have to populate it manually through the DBMS_MVIEW.REFRESH procedure.
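
A minimal sketch of populating the deferred materialized view later (the second argument 'C' requests a complete refresh):

SQL> exec dbms_mview.refresh('MYMVIEW2', 'C');

After the refresh, select count(*) from mymview2 should match the grouped result from the base table.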


3. Refresh [fast] [on commit | on demand]
Refresh is the method for repopulating the materialized view with data whenever changes are made to its base tables.

The commonly used refresh options for a materialized view are:
1. refresh fast (a materialized view log is needed)
2. refresh on commit
3. refresh on demand

What is materialized view log?
A materialized view log is a schema object that records changes to a master table's data so that a materialized view defined on the master table can be refreshed incrementally.
Each materialized view log is associated with a single master table. The materialized view log resides in the same database and schema as its master table.

Eg:
With refresh on commit


SQL> select count(*) from emp;
COUNT(*)
----------
14
SQL> create materialized view MYVIEW1
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select ename, sum(sal) from emp group by ename;
Materialized view created.

SQL> select count(*) from myview1;
COUNT(*)
----------
14
SQL> insert into emp (empno,ename,sal) values(1754,'TAJ',2500);
1 row created.
SQL> commit;
Commit complete.
SQL> select count(*) from myview1;
COUNT(*)
----------
15
With refresh on demand
SQL> select count(*) from emp;

COUNT(*)
----------
15

SQL> create materialized view MYVIEW5
2 build immediate
3 refresh on demand
4 enable query rewrite
5 as
6 select ename,sum(sal) from emp group by ename;

Materialized view created.

SQL> select count(*) from myview5;

COUNT(*)
----------
15

SQL> insert into emp (empno,ename,sal)values(1100,'xyz',1000);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from emp;

COUNT(*)
----------
16

SQL> select count(*) from myview5;

COUNT(*)
----------
15

SQL> exec dbms_mview.refresh('MYVIEW5');

PL/SQL procedure successfully completed.

SQL> select count(*) from myview5;

COUNT(*)
----------
16

With REFRESH ON COMMIT, whenever changes are made to the base tables and committed, the materialized view is refreshed automatically; with REFRESH ON DEMAND, you must refresh it yourself with DBMS_MVIEW.REFRESH, as shown above.

4. Enable query rewrite
A materialized view is like a query with a result that is materialized and stored in a table. When a user query is found compatible with the query associated with a materialized view, the user query can be rewritten in terms of the materialized view. This technique improves the execution of the user query, because most of the query result has been precomputed. The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query. The use of materialized views to rewrite a query is cost-based. That is, the query is not rewritten if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.
Note: query_rewrite_enabled=true and query_rewrite_integrity=enforced must be set to use the query rewrite feature.
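
For example, these can be set at the session level (a sketch; both parameters can also be set system-wide in the init/spfile):

SQL> ALTER SESSION SET query_rewrite_enabled = TRUE;

Session altered.

SQL> ALTER SESSION SET query_rewrite_integrity = ENFORCED;

Session altered.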

5. How does a materialized view improve query performance?

Let's check with a demo.

1. Create Big Table :)
SQL> create table big
2 as select * from all_objects
3 union all
4 select * from all_objects
5 union all
6 select * from all_objects
7 union all
8 select * from all_objects
9 union all
10 select * from all_objects;

Table created.

SQL> select count(*) from big;

COUNT(*)
----------
188995

2. Now execute below query and check query statistics
SQL> alter system flush shared_pool;

System altered.

SQL> set autotrace traceonly
SQL> set timing on
SQL> select owner,count(*) from big group by owner;

12 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=624 Card=188024 By
tes=3196408)

1 0 SORT (GROUP BY) (Cost=624 Card=188024 Bytes=3196408)
2 1 TABLE ACCESS (FULL) OF 'BIG' (TABLE) (Cost=598 Card=1880
24 Bytes=3196408)





Statistics
----------------------------------------------------------
957 recursive calls
0 db block gets
2844 consistent gets---- > too big :)
0 physical reads
0 redo size
1028 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
12 rows processed

3. Now Create materialized view and try
SQL> create materialized view MYVIEWBIG
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select owner,count(*) from big group by owner;

Materialized view created.

SQL> alter system flush shared_pool;

System altered.

SQL> set autotrace traceonly
SQL> set timing on
SQL> select owner,count(*) from big group by owner;

12 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=12 Bytes=36
0)

1 0 MAT_VIEW REWRITE ACCESS (FULL) OF 'MYVIEWBIG' (MAT_VIEW RE
WRITE) (Cost=3 Card=12 Bytes=360)





Statistics
----------------------------------------------------------
7356 recursive calls
0 db block gets
1313 consistent gets --- > just less than half :)
0 physical reads
0 redo size
1028 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
49 sorts (memory)
0 sorts (disk)
12 rows processed


4. What privileges are needed for a materialized view?

To create a materialized view in your own schema, you must have the CREATE MATERIALIZED VIEW privilege and the SELECT privilege to any tables referenced that are in another schema. To create a materialized view in another schema, you must have the CREATE ANY MATERIALIZED VIEW privilege and the owner of the materialized view needs SELECT privileges to the tables referenced if they are from another schema. Moreover, if you enable query rewrite on a materialized view that references tables outside your schema, you must have the GLOBAL QUERY REWRITE privilege or the QUERY REWRITE object privilege on each table outside your schema.

5. How to check the status of a materialized view?

SQL> select mview_name, query, rewrite_enabled, refresh_mode, build_mode, staleness, compile_state
  2  from user_mviews;

MVIEW_NAME QUERY R REFRES BUILD_MOD STALENESS COMPILE_ST
---------- -------------------- - ------ --------- ---------- ----------

MYBIG select owner, count( Y DEMAND IMMEDIATE NEEDS_COMP NEEDS_COMP
*) from big group by ILE ILE
owner

MYMVIEW2 select ename,sum(sal N DEMAND DEFERRED UNUSABLE NEEDS_COMP
) from emp group by ILE
ename

MYVIEW5 select ename,sum(sal Y DEMAND IMMEDIATE FRESH VALID
) from emp group by
ename

MYMVIEW3 select ename,sum(sal Y COMMIT IMMEDIATE UNUSABLE VALID
) from emp group by
ename

MYVIEW1 select ename, sum(sa Y COMMIT IMMEDIATE FRESH VALID
l) from emp group by

Explanation:
MVIEW_NAME -------- > Materialized view name
QUERY ------------- > Query that defines the materialized view
REWRITE_ENABLED --- > "Y" if the materialized view was created with ENABLE QUERY REWRITE, otherwise "N"
REFRESH_MODE ------ > DEMAND, COMMIT, NEVER
BUILD_MODE -------- > IMMEDIATE, DEFERRED
STALENESS --------- > Relationship between the contents of the materialized view and the contents of the materialized view's masters
COMPILE_STATE ----- > Validity of the materialized view with respect to the objects upon which it depends
VALID -> No object the materialized view depends on has changed since the last validation
NEEDS_COMPILE -> Some object the materialized view depends on has changed; an "alter materialized view ... compile;" statement is required to validate the materialized view
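
For example, to revalidate a materialized view that shows NEEDS_COMPILE:

SQL> ALTER MATERIALIZED VIEW mymview2 COMPILE;

Materialized view altered.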

Sunday, October 28, 2007

Oracle Latch



What is a latch?

A mechanism to protect shared data structures in the System Global Area.
For Example: latches protect the list of users currently accessing the database and protect the data structures describing the blocks in the buffer cache.

A server or background process acquires a latch for a very short time while manipulating or looking at one of these structures.


During database performance analysis we will often see LATCH wait events ... so what is a latch event, and what types of latch events are there?

A latch is a low-level internal lock used by Oracle to protect memory structures.

The latch free event is updated when a server process attempts to get a latch, and the latch is unavailable on the first attempt.

The most popular latch wait events are ...



1. Latch: library cache or Latch: shared pool

Below are possible causes for both of the above latch events:

1. Lack of statement reuse
2. Statements not using bind variables
3. Insufficient size of application cursor cache
4. Cursors closed explicitly after each execution
5. Frequent logon/logoffs
6. Underlying object structure being modified (for example truncate)
7. Shared pool too small

Below are possible suggestions to avoid both of the above latch events:

1. Increase SHARED_POOL_SIZE parameter value.
2. Modify Frontend application to use BIND VARIABLE
3. Use CURSOR_SHARING='force' (on a temporary basis)


2. Latch: cache buffers lru chain

Possible Causes

1. Inefficient SQL that accesses incorrect indexes iteratively (large index range scans) or many full table scans.
2. DBWR not keeping up with the dirty workload; hence, foreground process spends longer holding the latch looking for a free buffer
3. Cache may be too small

Possible Suggestion

1. Look for: Statements with very high logical I/O or physical I/O, using unselective indexes
2. Increase DB_CACHE_SIZE parameter value.
3. The cache buffers lru chain latches protect the lists of buffers in the cache. When adding, moving, or removing a buffer from a list, a latch must be obtained.

For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.

Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. LRU latch contention is detected by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider tuning the application, bypassing the buffer cache for DSS jobs, or redesigning the application.


3. Latch: cache buffers chains

Possible Causes

1. Repeated access to a block (or small number of blocks), known as a hot block
2. From AskTom:

Contention for these latches can be caused by:

- Very long buffer chains.
- very very heavy access to the same blocks.

Possible Suggestion

1. From AskTom:
When I see this, I try to see what SQL the waiters are trying to execute. Many times,
what I find, is they are all running the same query for the same data (hot blocks). If
you find such a query -- typically it indicates a query that might need to be tuned (to
access less blocks hence avoiding the collisions).

If it is long buffer chains, you can use multiple buffer pools to spread things out. You
can use DB_BLOCK_LRU_LATCHES to increase the number of latches. You can use both
together.

The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching for, adding, or removing a buffer from the buffer cache. Contention on this latch usually means that there is a block that is greatly contended for (known as a hot block).

To identify the heavily accessed buffer chain, and hence the contended for block, look at latch statistics for the cache buffers chains latches using the view V$LATCH_CHILDREN. If there is a specific cache buffers chains child latch that has many more GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended for child latch.

This latch has a memory address, identified by the ADDR column. Use the value in the ADDR column joined with the X$BH table to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily contended latch, this queries the file and block numbers:

SELECT OBJ data_object_id, FILE#, DBABLK,CLASS, STATE, TCH
FROM X$BH
WHERE HLADDR = 'address of latch'
ORDER BY TCH;

X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.

Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is a potential hot block. Perform this query a number of times, and identify the block that consistently appears in the output. After you have identified the hot block, query DBA_EXTENTS using the file number and block number, to identify the segment.

After you have identified the hot block, you can identify the segment it belongs to with the following query:

SELECT OBJECT_NAME, SUBOBJECT_NAME
FROM DBA_OBJECTS
WHERE DATA_OBJECT_ID = &obj;

In the query, &obj is the value of the OBJ column in the previous query on X$BH.


4. Latch: row cache objects

The row cache objects latches protect the data dictionary.
Suggestion: Increase SHARED_POOL_SIZE parameter to avoid this latch.
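
A simple starting point for seeing which latches are suffering the most contention on your system (a sketch; AWR and Statspack show the same information per snapshot interval):

SQL> SELECT name, gets, misses, sleeps
  2  FROM v$latch
  3  ORDER BY sleeps DESC;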

Thursday, October 25, 2007

System Statistics


What are system statistics?


System statistics describe the system's hardware characteristics, such as I/O and CPU performance and utilization, to the query optimizer.

Why gather SYSTEM STATISTICS ?

When choosing an execution plan, the optimizer estimates the I/O and CPU resources required for each query.
System statistics enable the query optimizer to more accurately estimate I/O and CPU costs, enabling it to choose a better execution plan.

Is it important to gather system statistics?

System statistics give an accurate cost for each SQL query, so the optimizer can make better decisions.
If you have good statistics, queries get better plans and database performance increases.

Many Oracle gurus, and Oracle Corporation itself, recommend gathering system statistics.

How do we gather system statistics?

We can gather system statistics through the DBMS_STATS package.

Step1
SQL> exec dbms_stats.gather_system_stats('Start');

Step2
SQL> --Wait for some time ... at least 1 hour, maybe 2 hours or a whole day, depending on the database load.

Step3
SQL> exec dbms_stats.gather_system_stats('Stop');
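
As an alternative (a sketch using the documented INTERVAL mode of DBMS_STATS.GATHER_SYSTEM_STATS), you can let Oracle start and stop the capture automatically after a given number of minutes:

SQL> exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 120);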


Keep in mind:
1. The user must be granted the DBA role (or equivalent privileges) to gather system statistics.
2. Unlike table, index, or column statistics, Oracle does not invalidate already parsed SQL statements when system statistics get updated; only new SQL statements are parsed using the new statistics.
3. Always gather system statistics during a representative, heavy peak-load period.


Where can we check the gathered system statistics?

After gathering system statistics, query the sys.aux_stats$ view.

SQL> select pname, pval1 from sys.aux_stats$;

PNAME PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS 1
CPUSPEEDNW 904.86697
IOSEEKTIM 10
IOTFRSPEED 4096
SREADTIM .995
MREADTIM 1.701
CPUSPEED 1268
MBRC 16
MAXTHR
SLAVETHR

13 rows selected.

Note:
CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED are new (noworkload) statistics that come with Oracle 10g; these values are already populated even before workload system statistics are gathered.


SREADTIM

Single block read time is the average time to read a single block randomly.

MREADTIM

Multiblock read time is the average time to perform a multiblock read sequentially.

MBRC

Multiblock read count is the average number of blocks read per sequential multiblock read.

Active Session History (ASH)



Oracle 10gR2 introduced a new option for capturing performance problems:
ASH >>> Active Session History <<<<
What is ASH?

ASH is useful whenever you want information about blocker and waiter identifiers and their associated transaction IDs and SQL.

About V$ACTIVE_SESSION_HISTORY


1. The V$ACTIVE_SESSION_HISTORY view provides sampled session activity in the instance.

2. Active sessions are sampled every second and are stored in a circular buffer in SGA.

3. Any session that is connected to the database and is waiting for an event that does not belong to the Idle wait class is considered as an active session.

4. This includes any session that was on the CPU at the time of sampling.

5. Each session sample is a set of rows and the V$ACTIVE_SESSION_HISTORY view returns one row for each active session per sample, returning the latest session sample rows first. Because the active session samples are stored in a circular buffer in SGA, the greater the system activity, the smaller the number of seconds of session activity that can be stored in the circular buffer. This means that the duration for which a session sample appears in the V$ view, or the number of seconds of session activity that is displayed in the V$ view, is completely dependent on the database activity.

6. Using the Active Session History enables you to examine and perform detailed analysis on both the current data in the V$ACTIVE_SESSION_HISTORY view and the historical data in the DBA_HIST_ACTIVE_SESS_HISTORY view.


What information does the ASH view provide?


1. SQL identifier of the SQL statement

2. Object number, file number, and block number

3. Wait event identifier and parameters

4. Session identifier and session serial number

5. Module and action name

6. Client identifier of the session

7. Service hash identifier


How to generate an ASH report?

The ASH report is a new feature of 10gR2 (10.2.0.1.0).
To create the report we have to use the ASHRPT.SQL script.
Location: the $ORACLE_HOME/rdbms/admin folder


How to run ASHRPT.SQL script


To generate a text report of ASH information, run the ashrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/ashrpt.sql

First, you need to specify whether you want an HTML or a text report.

Enter value for report_type: text

Specify the time frame to collect ASH information by first specifying the begin time in minutes prior to the system date.

Enter value for begin_time: -10
Note: here you just enter the number of minutes before the current time, e.g. -10 for the last 10 minutes.

Next, enter the duration in minutes for which you want to capture ASH information, starting from the begin time. The default duration (system date minus begin time) is accepted in the following example:

Enter value for duration:
Note: leave it blank for the default value (system date minus begin time).

The report in this example will gather ASH information beginning from 10 minutes before the current time and ending at the current time. Next, accept the default report name or enter a report name. The default name is accepted in the following example:

Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
Note: Leave it blank for the default report name.

The session history report is generated.


For further details please check Metalink note 243132.1.

Sunday, October 21, 2007

AWR Report/Snapshot








Prior to Oracle 10g we used the STATSPACK report to analyze instance health and performance, but from 10g onwards we use the AWR (Automatic Workload Repository) report to analyze Oracle database health.


How to generate an AWR report?


It is very easy to generate an AWR report in Oracle 10g.

We have to run just one SQL script to generate the AWR report.


There are TWO SQL scripts to create an AWR report:
1. awrrpt.sql
If we have only one Oracle instance, then run the awrrpt.sql script.

2. awrrpti.sql
If we have more than one Oracle instance (like RAC), then run the awrrpti.sql script so that we can pick a particular instance for AWR report creation.



Location of AWR report sql script


$ORACLE_HOME/rdbms/admin


Example Of AWR report


AWR report Example


What is the default interval period between two AWR snapshots?


AWR snapshots are generated automatically for the database at a 1 hour interval.
But a 15 minute interval between two snapshots is often recommended, and is enough to pin down a performance bottleneck more precisely.


How to manage and change the interval period for AWR snapshots?


There are two ways to modify or change the AWR settings:
1. Manually ---> through the DBMS_WORKLOAD_REPOSITORY PL/SQL package (a sketch follows below)

2. GUI ----> OEM (Oracle Enterprise Manager)
Login to OEM ---> Administration Tab ----> Workload Section ---> AUTOMATIC WORKLOAD REPOSITORY
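
A minimal sketch of the manual method (interval and retention are given in minutes; the values below are only examples):

SQL> BEGIN
  2    DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS (
  3      interval  => 15,        -- snapshot every 15 minutes
  4      retention => 11520);    -- keep 8 days (8 * 24 * 60 minutes)
  5  END;
  6  /

PL/SQL procedure successfully completed.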


How to interpret the AWR report?


Below are the main sections of the AWR report to focus on.

Load Profile

In this section we learn about logical reads, physical reads, soft parses, and hard parses.

Instance Efficiency Percentages

Many Oracle gurus recommend not trusting the instance efficiency ratios, because even if your report shows a BUFFER CACHE HIT RATIO of 100%, that is not a sign that your database is very fast and running smoothly.

Top 5 Timed Events

This is a very important section of the AWR report;
through it we can see the five wait events most affecting database performance.

Oracle Wait Event


SQL Statistics

In this section we see the execution time, CPU used, and logical and physical reads for the top SQL statements.


Advisory Statistics

In this section we get advice on PGA, buffer cache, shared pool, and Java pool sizing for better performance.


Visit the following link for a reference on interpreting the AWR report:

Reference I

Thursday, October 18, 2007

EXECUTION PLAN


What is execution plan ?


To run a DML statement, Oracle might need to perform many steps. Each of these steps either retrieves rows of data physically from the database or prepares them in some way for the user issuing the statement. The combination of the steps Oracle uses to run a statement is called an execution plan. An execution plan includes an access method for each table that the statement accesses and an ordering of the tables (the join order). The steps of the execution plan are not performed in the order in which they are numbered.


How to generate execution plan ?


There are three ways to view execution plan

1. Through AUTOTRACE

SQL> set autotrace trace exp
SQL> select * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=51
8)

1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes
=518)


2. Through the DBMS_XPLAN package (see also the DISPLAY_CURSOR sketch after this list)

SQL> explain plan for select * from emp;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 518 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| EMP | 14 | 518 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.

3. Through a trace file (trace event 10046 or sql_trace=true), formatted with the TKPROF utility.
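Related to option 2: on 10g and later, DBMS_XPLAN.DISPLAY_CURSOR can also show the plan of a statement that has already run, taken from the cursor cache. A minimal sketch (the &sql_id substitution variable is just a placeholder for a value found in V$SQL):

SQL> -- plan of the last statement executed in this session
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));

SQL> -- plan of a specific statement identified by its SQL ID
SQL> select * from table(dbms_xplan.display_cursor('&sql_id', null, 'TYPICAL'));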

Wednesday, October 17, 2007

10046 trace event


What is the 10046 trace event?


Whenever we want to tune a SQL statement and see its wait events and bind variable values, we can use the 10046 trace event.


How to use the 10046 trace event


First, set a trace file identifier so that we can easily find the generated trace file in the UDUMP directory.

SQL> alter session set tracefile_identifier='MYSESSION';

Enable 10046 tracing.

SQL> alter session set events '10046 trace name context forever, level 8';

Now execute SQL Statements...

select e.empno, e.ename, d.dname, d.deptno
from emp e , dept d
where e.deptno = d.deptno;

Disable 10046 tracing.

SQL> alter session set events '10046 trace name context off';

Check the UDUMP directory for the generated trace file; the file name will look like "XXXX_MYSESSION.TRC".

C:\Oracle\admin\ora9i\udump>dir *MYSESSION.trc

The generated trace file is a raw trace file that is very hard to read and understand, so use the TKPROF utility to create a readable output file from it.

C:\Oracle\admin\ora9i\udump>tkprof ORA01904_MYSESSION.TRC c:\output.log

TKPROF: Release 10.1.0.5.0 - Production on Wed Oct 17 19:01:44 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Open c:\output.log file and analyze it. below is content of output.log file

select e.empno, e.ename, d.dname, d.deptno
from emp e , dept d
where e.deptno = d.deptno


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.29 0.31 2 0 2 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 3 30 2 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.29 0.31 5 30 4 14


Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 66


--- Below is the execution plan

Rows Row Source Operation
------- ---------------------------------------------------
14 NESTED LOOPS
14 TABLE ACCESS FULL EMP
14 TABLE ACCESS BY INDEX ROWID DEPT
14 INDEX UNIQUE SCAN (object id 32119)



--- Wait time information


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 3 0.00 0.00
SQL*Net message from client 2 56.49 56.51





10046 Trace Level


Level 1
Basic trace level. Like the standard SQL_TRACE trace file. Provides statistics for parse, execute, fetch, commit and rollback database calls.

Level 4
Displays bind variables

Level 8
Displays wait statistics

Level 12
Displays wait statistics and bind variables
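On 10g, the level 8/12 behaviour can also be enabled for another session through DBMS_MONITOR instead of the raw event syntax. A minimal sketch (the SID and serial# values are placeholders taken from V$SESSION):

SQL> -- equivalent of level 12 (waits + binds) for session SID 123, serial# 456
SQL> exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

SQL> -- stop tracing that session
SQL> exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);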

Tuesday, October 16, 2007

Autotrace in SQLPLUS


What is AUTOTRACE


In SQL*Plus you can automatically get a report on the execution path used by the SQL optimizer and the statement execution statistics. The report is generated after a successful SQL DML statement, such as SELECT, DELETE, UPDATE or INSERT. It is useful for monitoring and tuning the performance of these DML statements.


How to configure AUTOTRACE in SQL*Plus for a database user


1. Change to the RDBMS admin directory and connect to the database as the SYSTEM user.
C:\>cd c:\oracle\product\10.1.0\db_1\rdbms\admin

C:\Oracle\product\10.1.0\Db_1\RDBMS\ADMIN>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 16 17:08:20 2007

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> conn system/manager
Connected.

2. Run the UTLXPLAN.SQL script to create the plan table.
SQL> @utlxplan.sql

3. Create a public synonym for the plan table and grant all privileges on it to PUBLIC.
SQL> create public synonym plan_table for plan_table;

SQL> grant all on plan_table to public;

Grant succeeded.

4. Exit, change to the SQL*Plus admin directory, and connect to the database as the SYS user.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production

C:\Oracle\product\10.1.0\Db_1\RDBMS\ADMIN>cd\

C:\>cd c:\oracle\product\10.1.0\db_1\sqlplus\admin

C:\Oracle\product\10.1.0\Db_1\sqlplus\admin>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 16 17:12:07 2007

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> conn sys as sysdba
Enter password:
Connected.

5. Run the plustrce script; it must be run as the SYS user.
SQL> @plustrce
SQL>
SQL> drop role plustrace;

Role dropped.

SQL> create role plustrace;

Role created.

SQL>
SQL> grant select on v_$sesstat to plustrace;

Grant succeeded.

SQL> grant select on v_$statname to plustrace;

Grant succeeded.

SQL> grant select on v_$mystat to plustrace;

Grant succeeded.

SQL> grant plustrace to dba with admin option;

Grant succeeded.

SQL> set echo off

6. Grant the PLUSTRACE role to PUBLIC.
SQL> grant plustrace to public;

Grant succeeded.


Configuring the Autotrace Report


SET AUTOTRACE OFF           - No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN    - The AUTOTRACE report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL statement execution statistics.
SET AUTOTRACE ON            - The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY     - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.


Database Statistics for SQL Statements


recursive calls
Number of recursive calls generated at both the user and system level. Oracle maintains tables used for internal processing. When Oracle needs to make a change to these tables, it internally generates an internal SQL statement, which in turn generates a recursive call.

db block gets
Number of times a CURRENT block was requested.

consistent gets
Number of times a consistent read was requested for a block.

physical reads
Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into buffer cache.

redo size
Total amount of redo generated in bytes.

bytes sent via SQL*Net to client
Total number of bytes sent to the client from the foreground processes.

bytes received via SQL*Net from client
Total number of bytes received from the client over Oracle Net.

SQL*Net roundtrips to/from client
Total number of Oracle Net messages sent to and received from the client.

sorts (memory)
Number of sort operations that were performed completely in memory and did not require any disk writes.

sorts (disk)
Number of sort operations that required at least one disk write.

rows processed
Number of rows processed during the operation.


EXAMPLE


SQL> set autotrace on
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

SQL> set autotrace traceonly
SQL> select * from dept;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

SQL> set autotrace trace explain
SQL> select * from dept;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'



SQL> set autotrace trace statistics
SQL> select * from dept;


Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

Wednesday, October 10, 2007

CURSOR_SHARING parameter


CURSOR_SHARING



What is the CURSOR_SHARING parameter?

CURSOR_SHARING determines what kind of SQL statements can share the same cursors.

What are the possible values for this parameter?

1. EXACT (default)
Only allows statements with identical text to share the same cursor.

2. FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.

3. SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized.

When should we use this parameter?

Whenever you have lots of SQL statements that differ only in their literals and your application is not designed to use bind variables, you can set CURSOR_SHARING=FORCE so that those statements share a cursor.

Consider setting CURSOR_SHARING to SIMILAR or FORCE if both of the following questions are true:

Are there statements in the shared pool that differ only in the values of literals?
Is the response time low due to a very high number of library cache misses?


Tom Kyte has said that cursor_sharing=force is not a permanent solution for performance; it should only be used temporarily while the developers fix the bug in the application. Instead of relying on this parameter, use bind variables.



Is there a performance improvement when we set cursor_sharing=force?

Yes. When your application runs lots of similar SQL statements that differ only in literals, performance will improve when you set cursor_sharing=force.
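A minimal sketch of the effect using the EMP demo table (the exact rewritten SQL text can vary by version):

SQL> alter session set cursor_sharing = force;

SQL> select ename from emp where empno = 7369;
SQL> select ename from emp where empno = 7499;

-- both statements are rewritten with a system-generated bind variable
-- and share a single parent cursor in the shared pool
select sql_text, executions
from   v$sql
where  sql_text like 'select ename from emp where empno%';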

Side effects on the database when cursor_sharing is set to FORCE/SIMILAR

Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications, or applications that use stored outlines.

Oracle does not recommend setting CURSOR_SHARING to FORCE in a DSS environment or if you are using complex queries. Also, star transformation is not supported with CURSOR_SHARING set to either SIMILAR or FORCE. For more information, see the "OPTIMIZER_FEATURES_ENABLE Parameter".

BUG with cursor_sharing=FORCE/SIMILAR

In Oracle 8i there are known bugs when cursor_sharing is set to force/similar.

Do we need to shut down the database to set this parameter?

No, we can set this parameter while the database is open:

alter system set CURSOR_SHARING=force SCOPE=both;
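To check the current value and revert to the default afterwards, a small sketch:

SQL> show parameter cursor_sharing

SQL> alter system set cursor_sharing = exact scope = both;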