
Sunday, December 30, 2007

redolog file status

Redo log files are crucial for database recovery, and most of the time we are confused about the STATUS column of the V$LOG view. Here is what each status value means:


1. UNUSED
Whenever you create a new redo log group, V$LOG shows its status as UNUSED: the log has never been written to.

2. CURRENT
When the LGWR process is writing to the log group, V$LOG shows its status as CURRENT.

3. ACTIVE
After a log switch, the group is no longer being written to but is still needed for crash recovery; if the database is running in ARCHIVELOG mode, the ARCn process may also still be creating the archived log from it. During this time V$LOG shows the status ACTIVE.

4. CLEARING
Whenever a log group is corrupted or missing (and is not ACTIVE or CURRENT), we use the "alter database clear logfile" statement to recreate it. While the log is being recreated, V$LOG shows the status CLEARING; after clearing completes, it shows UNUSED.

5. INACTIVE
Once a log group is no longer needed for instance recovery (for example, its archived log has been created), V$LOG shows the status INACTIVE and LGWR can reuse the group.


6. CLEARING_CURRENT
If an error occurs while the current log group is being cleared with "alter database clear logfile", V$LOG shows the status CLEARING_CURRENT.
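All of these statuses can be checked with a simple query against V$LOG; for example:

```sql
-- List every redo log group with its archive flag and status
SQL> select group#, sequence#, archived, status
  2  from v$log
  3  order by group#;
```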

Tuesday, December 25, 2007

Recovery Catalog


What is a recovery catalog?
A recovery catalog is an external database schema used to store the RMAN repository.

What is the RMAN repository?
The RMAN repository is the collection of metadata about registered target databases: information about RMAN backups, recovery operations, and maintenance.

Where is the RMAN repository stored when a recovery catalog is not in use?
When a recovery catalog is not in use, the RMAN repository is stored in the target database's CONTROLFILE.
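When the repository lives only in the controlfile, how long the backup records are retained is governed by the CONTROL_FILE_RECORD_KEEP_TIME parameter (default 7 days), which you can check like this:

```sql
-- Days that reusable RMAN records are kept in the controlfile
SQL> show parameter control_file_record_keep_time
```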

Where do we create the recovery catalog?
We have two options for it.
1. We can create the recovery catalog on the target database itself, by creating a separate tablespace for it.
Or
2. We can create a separate database for the recovery catalog.

NOTE: If we have more than one database, a separate database for the recovery catalog is recommended instead of a separate tablespace.

How does the recovery catalog get the RMAN repository information?
After creating the recovery catalog, we have to manually register each target database with it.

Do we need to back up the recovery catalog?
Yes, we must back up the recovery catalog, because it stores the most important information about RMAN backups, recovery, and RMAN configuration. If we lose the recovery catalog, we can't get that information back, so taking regular backups of it is always recommended.

How do we back up the recovery catalog?
1. If the recovery catalog was created as a separate tablespace on the target database,
then just export the recovery catalog schema and store the dump on a backup drive.

2. If the recovery catalog was created as a separate database,
then we can use any backup method that we would use for a target database; for instance, a user-managed backup or RMAN itself.
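For the export option, the classic export utility is enough. A sketch, assuming the catalog owner created below (catdb/catdb@dev) and hypothetical dump file names:

```
C:\>exp catdb/catdb@dev owner=catdb file=catdb.dmp log=catdb_exp.log
```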

Are there any views for querying the recovery catalog?
Yes. We actually create a separate user to own the recovery catalog, and after the user and the catalog are created, a set of views with the RC_ prefix exists in that user's schema. For instance: RC_DATABASE.
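For example, once a target has been registered (which is done later in this post), the catalog owner can list the registered databases:

```sql
SQL> conn catdb/catdb@dev
SQL> select name, dbid from rc_database;
```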

How do we configure a recovery catalog for a target database?
There are three steps in the recovery catalog configuration.
1. Configure the database that will contain the recovery catalog.
2. Create the owner of the recovery catalog.
3. Create the recovery catalog.

SQL> conn sys@dev as sysdba
Enter password:
Connected.
SQL> create tablespace CATDB
2 datafile 'c:\oracle\catdb01.dbf' size 10m;

Tablespace created.

NOTE: Here I am creating a separate tablespace for the recovery catalog on the target database.

SQL>conn sys@dev as sysdba
Connected.

SQL> create user CATDB
2 identified by CATDB
3 default tablespace CATDB;

NOTE: Owner for recovery catalog

SQL> grant connect,resource to catdb;

Grant succeeded.

SQL> grant recovery_catalog_owner to CATDB;

Grant succeeded.

NOTE: Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides the user with all privileges required to maintain and query the recovery catalog.


C:\>rman

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

RMAN> connect catalog catdb/catdb@dev

connected to recovery catalog database
recovery catalog is not installed

RMAN> create catalog tablespace CATDB;

recovery catalog created

NOTE: now recovery catalog is created.

SQL> conn catdb/catdb@dev
Connected.
SQL> select table_name from user_tables where rownum =1;

TABLE_NAME
------------------------------
RCVER

NOTE: After the recovery catalog is created, we can check that all the recovery catalog views were created in the catalog owner's schema.


What do we do after creating the recovery catalog?
Now just register the target database with the recovery catalog, so that the RMAN repository is stored in it.

C:\>rman target sys@dev catalog catdb/catdb@dev

Recovery Manager: Release 10.1.0.5.0 - Production

Copyright (c) 1995, 2004, Oracle. All rights reserved.

target database Password:
connected to target database: DEV (DBID=3718891706)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

NOTE: "target sys@dev" specifies the target database that we want to register in the recovery catalog, connecting to the catalog as its owner "catdb/catdb@dev".

Monday, December 24, 2007

Relocate Database Files


Three kinds of files are called database files:
1. Datafiles & Tempfiles
2. Control files
3. Redo log files


Sometimes we need to move database files from an old location to a new one.

How to move DATAFILES from an old location to a new one
(except for the SYSTEM, UNDO & TEMP tablespaces)

Steps:
1. Take the tablespace offline
alter tablespace tbsname OFFLINE;

2. Move the datafiles from the old location to the new one with an OS command
mv 'old location' 'new location'

3. Rename the datafiles inside the database
alter database rename file 'old location' to 'new location';

4. Bring the tablespace ONLINE
alter tablespace tbsname ONLINE;
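As a concrete sketch of the four steps above (the tablespace name and paths here are hypothetical; `host` runs the OS command from within SQL*Plus on Windows):

```sql
SQL> alter tablespace users offline;
SQL> host move D:\oradata\users01.dbf E:\oradata\users01.dbf
SQL> alter database rename file 'D:\oradata\users01.dbf' to 'E:\oradata\users01.dbf';
SQL> alter tablespace users online;
```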


For the SYSTEM or UNDO tablespace

Steps:
1. Shut down the database
shutdown immediate;

2. Move the datafile from the old location to the new one with an OS command
mv 'old location' 'new location'

3. Start the database in the MOUNT stage
startup mount;

4. Rename the datafile inside the database
alter database
rename file 'old location' to 'new location';

5. Open the database for normal use
alter database open;


We can't relocate a tempfile; if we need to relocate one, the best option is to recreate the temp tablespace in the new location.
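A sketch of recreating the temp tablespace in a new location (the tablespace name, size, and path are hypothetical; this assumes TEMP is currently the default temporary tablespace):

```sql
SQL> create temporary tablespace temp2
  2  tempfile 'E:\oradata\temp02.dbf' size 500m;
SQL> alter database default temporary tablespace temp2;
SQL> drop tablespace temp including contents and datafiles;
```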




How to move a REDO LOG MEMBER from an old location to a new one


Steps:
1. Shut down the database
shutdown immediate;

2. Move the redo log member from the old location to the new one
mv 'old location' 'new location'

3. Start the database in the MOUNT stage
startup mount;

4. Rename the redo log member inside the database
alter database rename file 'old location' to 'new location';

5. Open the database for normal use
alter database open;
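After opening the database, the new member locations can be verified against V$LOGFILE:

```sql
SQL> select group#, member from v$logfile order by group#;
```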



How to move a CONTROL FILE from an old location to a new one



Steps:
1. Shut down the database
shutdown immediate;

2. Move the controlfile from the old location to the new one
mv 'old location' 'new location'

3. Edit the CONTROL_FILES parameter in the pfile and change the location from old to new
CONTROL_FILES='new location'

4. Recreate the SPFILE from the PFILE
create spfile from pfile;

5. Start up the database
startup;
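Once the database is up, the controlfile locations the instance is actually using can be confirmed with:

```sql
SQL> select name from v$controlfile;
```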

Wednesday, December 5, 2007

OS Authentication

• Once authenticated by the operating system, users can connect to Oracle more conveniently, without specifying a user name or password. For example, an operating-system-authenticated user can invoke SQL*Plus and skip the user name and password prompts by entering the following:
SQLPLUS /
• With control over user authentication centralized in the operating system, Oracle need not store or manage user passwords, though it still maintains user names in the database.
• Audit trails in the database and operating system use the same user names.


Note: When an operating system is used to authenticate database users, managing distributed database environments and database links requires special care.


OS Authentication Configuration Steps


1. Create the OS user
2. Set the parameters below.
alter system set os_authent_prefix = 'OPS$' scope=spfile;
Note: The default value of this parameter is "OPS$".

If the OS user is a member of a domain group, then also set the parameter below.

alter system set remote_os_authent=TRUE scope=spfile;
Note: The default value of this parameter is FALSE.

3. Edit the sqlnet.ora file and set the line below to NTS
SQLNET.AUTHENTICATION_SERVICES=(NTS)

4. Bounce database
shutdown immediate
startup

5. Create the Oracle user for OS authentication

First check the OS username
select sys_context('USERENV','OS_USER') from dual;

create user "OPS$USERNAME" identified externally;

Note: The Oracle username must be the same as the OS user name and must start with the "OPS$" prefix (the value of OS_AUTHENT_PREFIX).
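On 10g, externally identified users can be spotted in DBA_USERS, where the PASSWORD column shows EXTERNAL; a quick check:

```sql
SQL> select username from dba_users where password = 'EXTERNAL';
```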

6. Test to connect through OS authentication

sqlplus /

Monday, November 26, 2007

Data file fragmentation and Data file resize

There is no fragmentation at the datafile level, but sometimes a datafile is created much bigger than the data it actually holds; in that case we can resize the datafile to reduce the database size.
We can only reduce a datafile down to its HWM (High Water Mark).

SQL> select df.file_name,round(df.bytes/1024/1024) TotalSize,fs.FreeSize
from dba_data_files df,
(select file_id,round(sum(bytes/1024/1024)) FreeSize
from dba_free_space
group by file_id
) fs
where df.file_id=fs.file_id(+)
order by 2,3 desc;

FILE_NAME TOTALSIZE FREESIZE
-------------------------------------------------- ---------- ----------
D:\ORACLE\PRODUCT\10.1.0\ORADATA\USERS01.DBF 46 1
D:\BIG01.DBF 100 100
D:\ORACLE\PRODUCT\10.1.0\ORADATA\UNDOTBS01.DBF 465 436
D:\ORACLE\PRODUCT\10.1.0\ORADATA\SYSTEM01.DBF 620 63
D:\ORACLE\PRODUCT\10.1.0\ORADATA\SYSAUX01.DBF 700 32


Note: Size in MB (Mega Bytes)

The two datafiles that we can reduce are 1. big01.dbf and 2. undotbs01.dbf

SQL> alter database
datafile 'd:\big01.dbf' resize 50m;

Database altered.

SQL> alter database
datafile 'd:\oracle\product\10.1.0\oradata\undotbs01.dbf' resize 100m;
alter database
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value

http://dbataj.blogspot.com/2007/01/ora-03297.html

Note: The ORA-03297 error happens because we are trying to reduce the datafile below its HWM, which is not possible. Go through the link above to check how far the datafile size can be reduced.
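To estimate the HWM of a particular datafile (i.e. the smallest size it can be resized to), you can look at the highest allocated extent. A sketch that assumes an 8 KB block size and a hypothetical file_id of 5:

```sql
SQL> select ceil((nvl(max(block_id + blocks),1) * 8192)/1024/1024) "HWM_MB"
  2  from dba_extents
  3  where file_id = 5;
```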

Index Fragmentation / Rebuild

How do we find out whether an index is fragmented?

First analyze index

SQL>analyze index INDEX_NAME validate structure;


Then query the INDEX_STATS view:
1. If del_lf_rows/lf_rows is > 0.2, then the index should be rebuilt.
2. If the height is 4 or more, then the index should be rebuilt.
3. If lf_rows is lower than lf_blks, then the index should be rebuilt.

SQL> column status format a10
SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;

status
----------
21.83%
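All three criteria can be read from INDEX_STATS in one query (the index must have been analyzed with VALIDATE STRUCTURE first; note that INDEX_STATS only ever holds the last index analyzed in the session):

```sql
SQL> select name, height, lf_rows, lf_blks, del_lf_rows,
  2         trunc(del_lf_rows/lf_rows*100,2) pct_deleted
  3  from index_stats;
```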


How do we remove index fragmentation?

There are two ways to remove fragmentation:
1. index coalesce
2. index rebuild

For the difference between coalesce and rebuild, please go through the link below for more details
http://download.oracle.com/docs/cd/B14117_01/server.101/b10739/indexes.htm#g1007548

SQL> alter index IDX_OBJ_ID coalesce;

SQL> alter index IDX_OBJ_ID rebuild;

SQL> alter index IDX_OBJ_ID rebuild online;


Note: If DML statements are running against the base table, then we have to use the ONLINE keyword when rebuilding the index.


SQL> analyze index idx_obj_id validate structure;

Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;

status
-------
40.85%

SQL> alter index IDX_OBJ_ID rebuild online;

Index altered.

SQL> analyze index idx_obj_id validate structure;

Index analyzed.

SQL> select trunc((del_lf_rows/lf_rows)*100,2)||'%' "status" from index_stats;

status
--------
0%


Note: Rebuild an index only when it is fragmented or genuinely needed; otherwise, index rebuilding as a blanket performance fix is a myth.

Sunday, November 25, 2007

ORA-00600


Database: 10gr1(10.1.0.5.0)
OS: windows 2003

ORA-00600: internal error code, arguments: [17302], [20], [], [], [], [], [], []


It was reported as a bug and is fixed in 10gR2.

Metalink Note: 301853.1

Monday, November 12, 2007

How to Clear Oracle Certification ... ?

Tips:
Oracle Certification - Tips

DBA Preparation - Tips

Practical DBA - Tips


How to Clear Oracle Certification?

A couple of questions come to mind when anyone thinks about Oracle certification.

1. Is Oracle Certification beneficial for job seekers?
You know what certification is...?
It is just a set of questions, based 80% on concepts and theory.
And any job opening wants a practical DBA, meaning practical experience along with theory.

So my advice is that any Oracle newbie should first get some practical experience and then go for certification.


2. How to prepare for Oracle Certification?
An Oracle certification is an examination, so some of the rules we followed in our college examinations apply here too.
1. First decide which Oracle paper you want to write.
Go through the link below and choose your paper
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=42&p_org_id=1080544&lang=US
2. Second, check the exam topics and objectives.
Suppose we choose the "1Z0-007" Introduction to SQL paper. Now check which exam topics come up in it, because SQL is a very big subject and the exam questions are based on some of the most important chapters.
Check the exam topics for the "Introduction to SQL" paper
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=41&p_org_id=1080544&lang=US&p_exam_id=1Z0_001#3

3. What is the Oracle exam format?
Whenever anybody is going to write an Oracle paper for the first time, they always worry about the exam format. Don't worry: Oracle provides a demo of the exam format.
1. Oracle exams are objective (multiple choice) type.
2. For the demo version, please go through the link below (paper: 1Z0-007)
http://www.selftestsoftware.com/demo/default.aspx

4. What material do we use for preparation?
1. The best material is the "student guide": simple, easy, and short.
Student guide books come with the Oracle course kits.
2. Oracle Documentation
http://tahiti.oracle.com
3. Practice questions
There are many sites on the internet that provide practice questions for Oracle certification.

http://www.google.ae/search?hl=en&q=practice+question+oracle&meta=

5. Looking for dumps?
Do you know what dumps are?
Dumps are the real questions that come up in the Oracle certification exams.
Some centres sell exam questions as dumps for money, and it is illegal; please check the link below
http://www.oracle.com/global/us/education/certification/canagreemt.html
And even if you do get dumps from somewhere, please don't depend on them.
There is no guarantee that all the questions will come from the dumps.
Some Prometric centres keep the real questions in their centre and refresh them periodically, but others, like Oracle's own Prometric centres, use the main server over the internet.




Related Links

1. http://dbataj.blogspot.com/2007/10/why-i-choose-oracle-as-my-career.html

2. http://dbataj.blogspot.com/2007/09/faqs-about-oracle-certification.html

3. http://dbataj.blogspot.com/2007/08/oracle-plsql-developer-certified.html

4. http://dbataj.blogspot.com/2007/08/oracle-11i-apps-dba.html

5. http://dbataj.blogspot.com/2007/03/oracle-certified-professional.html

Wednesday, November 7, 2007

Advanced Replication [Part-II Master Replication]

Part-II Master Replication

After reading and understanding Part-I, let's now see how to set up a multimaster replication environment.

What is Master Replication?
Oracle has two types of master replication: single master replication and multimaster replication.
Note: Here I explain only Multimaster replication.

1. Multimaster replication includes multiple master sites, where each master site operates as an equal peer.
2. Multimaster replication, also known as peer-to-peer or n-way replication, is comprised of multiple master sites equally participating in an update-anywhere model.
3. Oracle database servers operating as master sites in a multimaster replication environment automatically work to converge the data of all table replicas, and ensure global transaction consistency and data integrity.
4. Conflict resolution is independently handled at each of the master sites.
5. Multimaster replication provides complete replicas of each replicated table at each of the master sites.



Why Use Multimaster Replication?
From a very basic point of view, replication is used to make sure that data is available when and where you need it.
Some of the situations in which we use multimaster replication are:
Failover
Multimaster replication can be used to protect the availability of a mission critical database. For example, a multimaster replication environment can replicate data in your database to establish a failover site should the primary site become unavailable due to system or network outages. Such a failover site also can serve as a fully functional database to support application access when the primary site is concurrently operational.
Load Balancing
Multimaster replication is useful for transaction processing applications that require multiple points of access to database information for the following purposes:
1. Distributing a heavy application load
2. Ensuring continuous availability
3. Providing more localized data access

Multimaster Replication Process
There are two types of multimaster replication: asynchronous and synchronous.
asynchronous
Asynchronous replication, often referred to as store-and-forward replication, captures any local changes, stores them in a queue, and, at regular intervals, propagates and applies these changes at remote sites. With this form of replication, there is a period of time before all sites achieve data convergence.
synchronous
Synchronous replication, also known as real-time replication, applies any changes or executes any replicated procedures at all sites participating in the replication environment as part of a single transaction. If the data manipulation language (DML) statement or procedure fails at any site, then the entire transaction rolls back. Synchronous replication ensures data consistency at all sites in real-time.

Note: You can change the propagation mode from asynchronous to synchronous or vice versa for a master site.

How do we set up the master sites?

Set the parameters below:
global_names
alter system set global_names=TRUE;

job_queue_processes
alter system set job_queue_processes = 1;

Considerations for Replicated Tables
Primary Keys
If possible, each replicated table should have a primary key. Where a primary key is not possible, each replicated table must have a set of columns that can be used as a unique identifier for each row of the table.
Foreign Keys
When replicating tables with foreign key referential constraints, Oracle Corporation recommends that you always index foreign key columns and replicate these indexes, unless no updates and deletes are allowed in the parent table. Indexes are not replicated automatically. To replicate an index, add it to the master group containing its table using either the Replication Management tool or the CREATE_MASTER_REPOBJECT procedure in the DBMS_REPCAT package.
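Replicating an index uses the same CREATE_MASTER_REPOBJECT procedure that is used for tables later in this post; a sketch, assuming a hypothetical index PK_EMP in the scott schema and the master group created below:

```sql
SQL> BEGIN
  2    DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
  3      gname => 'scott_repg',
  4      type  => 'INDEX',
  5      oname => 'PK_EMP',
  6      sname => 'scott');
  7  END;
  8  /
```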

Create DB for replication

Here I created two databases for replication:
db01
db02

Setting up DB01 master site

Step 1 Connect as SYSTEM at a master site at DB01
SQL> conn system/oracle@db01
Connected.

Step 2 Create the replication administrator at DB01
SQL> CREATE USER repadmin IDENTIFIED BY repadmin;

User created.

Step 3 Grant privileges to the replication administrator at DB01
SQL> BEGIN
2 DBMS_REPCAT_ADMIN.GRANT_ADMIN_ANY_SCHEMA (
3 username => 'repadmin');
4 END;
5 /

PL/SQL procedure successfully completed.

Note: This grants the privileges needed to create and manage a replicated environment.

SQL> GRANT SELECT ANY DICTIONARY TO repadmin;

Grant succeeded.

Note: If you want your repadmin to be able to connect to the Replication Management tool, then grant SELECT ANY DICTIONARY to repadmin:

Step 4 Register the propagator at DB01
SQL> BEGIN
2 DBMS_DEFER_SYS.REGISTER_PROPAGATOR (
3 username => 'repadmin');
4 END;
5 /

PL/SQL procedure successfully completed.

Note: The propagator is responsible for propagating the deferred transaction queue to other master sites.

Step 5 Register the receiver at DB01
SQL> BEGIN
2 DBMS_REPCAT_ADMIN.REGISTER_USER_REPGROUP (
3 username => 'repadmin',
4 privilege_type => 'receiver',
5 list_of_gnames => NULL);
6 END;
7 /

PL/SQL procedure successfully completed.

Note: The receiver receives the propagated deferred transactions sent by the propagator from other master sites.
Step 6 Schedule purge at master site at DB01
SQL> CONNECT repadmin/repadmin@db01
Connected.
SQL>
SQL> BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PURGE (
3 next_date => SYSDATE,
4 interval => 'SYSDATE + 1/24',
5 delay_seconds => 0);
6 END;
7 /

PL/SQL procedure successfully completed.

Note: In order to keep the size of the deferred transaction queue in check, you should purge successfully completed deferred transactions. The SCHEDULE_PURGE procedure automates the purge process for you. You must execute this procedure as the replication administrator.
Note: Repeat the above 6 steps at the DB02 master site.

Step-2 Create Scheduled Links between the Master Sites

Step 1 Create database links between master sites.
The database links provide the necessary distributed mechanisms to allow the different replication sites to replicate data among themselves. Before you create any private database links, you must create the public database links that each private database link will use. You then must create a database link between all replication administrators at each of the master sites that you have set up.
SQL> conn system/oracle@db01
Connected.
SQL> create public database link db02 using 'db02';

Database link created.

SQL> conn repadmin/repadmin@db01
Connected.
SQL> create database link db02 connect to repadmin identified by repadmin;

Database link created.

SQL> conn system/oracle@db02
Connected.
SQL> create public database link db01 using 'db01';

Database link created.

SQL> conn repadmin/repadmin@db02
Connected.
SQL> create database link db01 connect to repadmin identified by repadmin;

Database link created.

Step 2 Define a schedule for each database link to create scheduled links.
Create a scheduled link by defining a database link when you execute the SCHEDULE_PUSH procedure. The scheduled link determines how often your deferred transaction queue is propagated to each of the other master sites. You need to execute the SCHEDULE_PUSH procedure for each database link that you created in Step 1. The database link is specified in the destination parameter of the SCHEDULE_PUSH procedure.
Even when using Oracle's asynchronous replication mechanisms, you can configure a scheduled link to simulate continuous, real-time replication. The scheduled links in this example simulate continuous replication.
SQL> conn repadmin/repadmin@db01
Connected.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PUSH (
3 destination => 'DB02',
4 interval => 'SYSDATE + (1/144)',
5 next_date => SYSDATE,
6 parallelism => 1,
7 execution_seconds => 1500,
8 delay_seconds => 1200);
9* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> conn repadmin/repadmin@db02
Connected.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_DEFER_SYS.SCHEDULE_PUSH (
3 destination => 'DB01',
4 interval => 'SYSDATE + (1/144)',
5 next_date => SYSDATE,
6 parallelism => 1,
7 execution_seconds => 1500,
8 delay_seconds => 1200);
9* END;
SQL> /

PL/SQL procedure successfully completed.


Step-3 Create Master Group

Here I am using the SCOTT schema and its objects for replication because the SCOTT schema exists in a default Oracle installation.
Objects used for replication in the scott schema:
1. DEPT - table
2. EMP - table
A primary key must exist on each table that is used for replication.
Note: If the DB version is 10g, first unlock the scott account.

Step-1 Unlock the Scott schema and reset the schema password, or create a new schema for replication.
Step 2 Create the master group

SQL> conn repadmin/repadmin@db01
Connected.
SQL> begin
2 dbms_repcat.create_master_repgroup (
3 gname => 'scott_repg');
4 end;
5 /

PL/SQL procedure successfully completed.
Note: This step must be completed by the replication administrator.

Step 3 Add objects to master group.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
3 gname => 'scott_repg',
4 type => 'TABLE',
5 oname => 'EMP',
6 sname => 'scott',
7 use_existing_object => TRUE,
8 copy_rows => FALSE);
9* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.CREATE_MASTER_REPOBJECT (
3 gname => 'scott_repg',
4 type => 'TABLE',
5 oname => 'DEPT',
6 sname => 'scott',
7 use_existing_object => TRUE,
8 copy_rows => FALSE);
9* END;
SQL> /

PL/SQL procedure successfully completed.

Note: Use the CREATE_MASTER_REPOBJECT procedure to add an object to your master group. In most cases, you probably will be adding tables and indexes to your master group, but you can also add procedures, views, synonyms, and so on.
Step 4 Add additional master sites.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.ADD_MASTER_DATABASE (
3 gname => 'scott_repg',
4 master => 'DB02',
5 use_existing_objects => TRUE,
6 copy_rows => FALSE,
7 propagation_mode => 'ASYNCHRONOUS');
8* END;
SQL> /

PL/SQL procedure successfully completed.
Note: You should wait until db02 appears in the DBA_REPSITES view before continuing. Execute the following SELECT statement in another SQL*Plus session to make sure that db02 has appeared:
SQL> conn repadmin/repadmin@db01
Connected.
SQL> SELECT DBLINK FROM DBA_REPSITES WHERE GNAME = 'SCOTT_REPG';
DBLINK
------------------------------------------------------------------
DB01
DB02
Note:
1. In this example, the use_existing_objects parameter in the ADD_MASTER_DATABASE procedure is set to TRUE because it is assumed that the scott schema already exists at all master sites. In other words, it is assumed that the objects in the scott schema are precreated at all master sites. Also, the copy_rows parameter is set to FALSE because it is assumed that the identical data is stored in the tables at each master site.
2. Here I am using the "ASYNCHRONOUS" replication mode.
3. Here I am skipping the "conflict resolution methods".
Note: Repeat Step-3 at DB02 master site.
Step 5 Generate replication support.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'emp',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
SQL> /

PL/SQL procedure successfully completed.

SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'dept',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
SQL> /

PL/SQL procedure successfully completed.
Note: You should wait until the DBA_REPCATLOG view is empty before resuming master activity. Execute the following SELECT statement to monitor your DBA_REPCATLOG view:

SQL> SELECT COUNT(*) FROM DBA_REPCATLOG WHERE GNAME = 'SCOTT_REPG';

COUNT(*)
----------
0
Step-6 Start replication.
After creating your master group, adding replication objects, generating replication support, and adding additional master databases, you need to start replication activity. Use the RESUME_MASTER_ACTIVITY procedure to "turn on" replication for the specified master group.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.RESUME_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
SQL> /
PL/SQL procedure successfully completed.
Here we complete multimaster replication procedure.
Now let's check whether what we did is correct or not :)
Connect as scott to db01, insert a new row into the dept table, and commit.
SQL> conn scott/tiger@db01
Connected.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON

SQL> insert into dept values (50,'DUMMY','DUMMY');

1 row created.
SQL> commit;
Commit complete.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 DUMMY DUMMY--- New row

--Wait some time ... then connect as scott to db02 and check that the new row has replicated to the dept table.
SQL> conn scott/tiger@db02
Connected.
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
50 DUMMY DUMMY --- New row
Note: You can perform any DML operation on the master definition site "db01" and commit; the changes will be applied at the DB02 site. Likewise, if you perform any changes in the DB02 database, the updates will be applied at DB01, and vice versa.
SQL> conn scott/tiger@db02
Connected.
SQL> select count(*) from dept;

COUNT(*)
----------
5
--Delete one row from the dept table at "db02".
SQL> delete dept where dname = 'DUMMY';

1 row deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from dept;

COUNT(*)
----------
4

SQL> disconn
SQL> conn scott/tiger@db01
Connected.
SQL> select count(*) from dept;

COUNT(*)
----------
4


Altering a Replicated Object



Step 1 Connect to the master definition site as the replication administrator.

SQL> conn repadmin/repadmin@db01
Connected.
Step 2 If necessary, then quiesce the master group.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
5 /

PL/SQL procedure successfully completed.
Step 3 In a separate SQL*Plus session, check the status of the master group you are quiescing.
Do not proceed until the group's status is QUIESCED.
SQL> SELECT GNAME, STATUS FROM DBA_REPGROUP;

GNAME STATUS
------------------------------ ---------
SCOTT_REPG QUIESCING
Step 4 Alter the replicated object.
SQL> ED
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.ALTER_MASTER_REPOBJECT (
3 sname => 'SCOTT',
4 oname => 'dept',
5 type => 'TABLE',
6 ddl_text => 'ALTER TABLE SCOTT.DEPT ADD (timestamp DATE)');
7* END;
SQL> /

PL/SQL procedure successfully completed.
Note: This adds a TIMESTAMP column to the scott.dept@db01 table.
Step 5 Regenerate replication support for the altered object.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
3 sname => 'scott',
4 oname => 'dept',
5 type => 'TABLE',
6 min_communication => TRUE);
7* END;
8 /

PL/SQL procedure successfully completed.
Step 6 In a separate SQL*Plus session, check if DBA_REPCATLOG is empty.
SQL> select count(*) from dba_repcatlog;

COUNT(*)
----------
0
Note: Do not proceed until this view is empty.
Step 7 Resume replication activity.
SQL> ed
Wrote file afiedt.buf

1 BEGIN
2 DBMS_REPCAT.RESUME_MASTER_ACTIVITY (
3 gname => 'scott_repg');
4* END;
SQL> /

PL/SQL procedure successfully completed.
Now check the dept table in both databases, "db01" and "db02".
SQL> conn scott/tiger@db01
Connected.
SQL> desc dept
Name Null? Type
----------------------------------------- -------- -------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
TIMESTAMP DATE

SQL> conn scott/tiger@db02
Connected.
SQL> desc dept
Name Null? Type
----------------------------------------- -------- -------------------
DEPTNO NOT NULL NUMBER(2)
DNAME VARCHAR2(14)
LOC VARCHAR2(13)
TIMESTAMP DATE


Listing the Master Sites Participating in a Master Group


Here I created two DBs: one is the master definition site and the other is a master site.
With the query below we can find out which one is the master definition site and which one is a master site.
SQL> conn repadmin/repadmin@db01
Connected.
SQL> COLUMN GNAME HEADING 'Master Group' FORMAT A20
SQL> COLUMN DBLINK HEADING 'Sites' FORMAT A25
SQL> COLUMN MASTERDEF HEADING 'Master|Definition|Site?' FORMAT A10
SQL> SELECT GNAME, DBLINK, MASTERDEF
2 FROM DBA_REPSITES
3 WHERE MASTER = 'Y'
4 AND GNAME NOT IN (SELECT GNAME FROM DBA_REPSITES WHERE SNAPMASTER = 'Y'
)
5 ORDER BY GNAME;
Master
Definition
Master Group Sites Site?
-------------------- ------------------------- ----------
SCOTT_REPG DB02 N
SCOTT_REPG DB01 Y
Note: DB02 is a master site and DB01 is the master definition site.

Monday, November 5, 2007

Advance Replication [Part-I Introduction]

Dear Friends,

We can divide advanced replication into three parts for better understanding.

1. Part-I Introduction
2. Part-II Multimaster Replication
3. Part-III Materialized view replication

Part-I Introduction

What is Advanced Replication?
Replication is the process of copying and maintaining database objects, such as tables, in multiple databases that make up a distributed database system. Changes applied at one site are captured and stored locally before being forwarded and applied at each of the remote locations. Advanced Replication is a fully integrated feature of the Oracle server; it is not a separate server.

Replication uses distributed database technology to share data between multiple sites, but a replicated database and a distributed database are not the same. In a distributed database, data is available at many locations, but a particular table resides at only one location. For example, the employees table resides at only the ny.world database in a distributed database system that also includes the hk.world and la.world databases. Replication means that the same data is available at multiple locations. For example, the employees table is available at ny.world, hk.world, and la.world.

Some of the common reasons for using replication are:

Availability
Replication improves the availability of applications because it provides them with alternative data access options. If one site becomes unavailable, then users can continue to query or even update the remaining locations. In other words, replication provides excellent failover protection.
Performance
Replication provides fast, local access to shared data because it balances activity over multiple sites. Some users can access one server while other users access different servers, thereby reducing the load at all servers. Also, users can access data from the replication site that has the lowest access cost, which is typically the site that is geographically closest to them.
Disconnected computing
A materialized view is a complete or partial copy (replica) of a target table from a single point in time. Materialized views enable users to work on a subset of a database while disconnected from the central database server. Later, when a connection is established, users can synchronize (refresh) materialized views on demand. When users refresh materialized views, they update the central database with all of their changes, and they receive any changes that may have happened while they were disconnected.
Network load reduction
Replication can be used to distribute data over multiple regional locations. Then, applications can access various regional servers instead of accessing one central server. This configuration can reduce network load dramatically.


Architecture of advance replication


There are two types of Advance Replication

1. Multimaster Replication
2. Materialized view replication

1. Multimaster replication
Start --- > set up master site --- > Select object for replication --- > Create Group for selected objects --- > if conflict possible then configure conflict resolution ---> end

2. Materialized view replication
Start --- > set up master site -- > set up materialized view site --- > create materialized view group --- > end

Note: Detailed explanations follow in Part II and Part III.

Replication Sites
A replication group can exist at multiple replication sites. Replication environments support two basic types of sites: master sites and materialized view sites. One site can be both a master site for one replication group and a materialized view site for a different replication group. However, one site cannot be both the master site and the materialized view site for the same replication group.

The differences between master sites and materialized view sites are the following:
Replication Group
1. A replication group at a master site is more specifically referred to as a master group.
2. A replication group at a materialized view site is based on a master group and is more specifically referred to as a materialized view group.

Replication Objects
1. A master site maintains a complete copy of all objects in a replication group.
Example: if the hr_repg master group contains the tables employees and departments, then all of the master sites participating in the master group must maintain a complete copy of employees and departments.

2. Materialized views at a materialized view site can contain all or a subset of the table data within a master group.
Example: one materialized view site might contain only a materialized view of the employees table, while another materialized view site might contain materialized views of both the employees and departments tables.

Communication with master sites
1. All master sites in a multimaster replication environment communicate directly with one another to continually propagate data changes in the replication group
2. Materialized view sites contain an image, or materialized view, of the table data from a certain point in time.

Replication Objects
A replication object is a database object existing on multiple servers in a distributed database system. In a replication environment, any updates made to a replication object at one site are applied to the copies at all other sites. Advanced Replication enables you to replicate the following types of objects:
• Tables
• Indexes
• Views and Object Views
• Packages and Package Bodies
• Procedures and Functions
• User-Defined Types and Type Bodies
• Triggers
• Synonyms
• Index types
• User-Defined Operators

Replication Groups
In a replication environment, Oracle manages replication objects using replication groups. A replication group is a collection of replication objects that are logically related.
By organizing related database objects within a replication group, it is easier to administer many objects together. Typically, you create and use a replication group to organize the schema objects necessary to support a particular database application. However, replication groups and schemas do not need to correspond with one another. A replication group can contain objects from multiple schemas, and a single schema can have objects in multiple replication groups. However, each replication object can be a member of only one replication group.
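As a rough sketch of how a master group is created with the DBMS_REPCAT package (run at the master definition site as the replication administrator; the group name scott_repg matches the group used elsewhere in this post):

```sql
BEGIN
  -- Create the master group...
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'scott_repg');
  -- ...and add an existing table to it as a replication object.
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'scott_repg',
    sname => 'scott',
    oname => 'dept',
    type  => 'TABLE',
    use_existing_object => TRUE);
END;
/
```

After adding objects, replication support is generated and master activity resumed, as the steps earlier in this post show.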


Part-II

Thursday, November 1, 2007

Materialized view for Data Warehouse

Dear friends,

Did you hear about Materialized view?
Ans: Yes, No or May be.

Now I will explain it in detail.

What is a materialized view?

1. A materialized view is a normal database object, like a table or an index.
2. It is mainly used for data warehousing or replication purposes.
3. Snapshot is a synonym for materialized view.
4. A materialized view can be stored in the same database as its base tables or in a different database.
5. A materialized view provides access to table data by storing the results of a query in a separate schema object. Unlike an ordinary view, which does not take up any storage space or contain any data, a materialized view contains the rows resulting from a query against one or more base tables or views.
6. A materialized view improves response time through query rewrite and reduces execution time.

Note: Query rewrite is a mechanism whereby Oracle transparently improves query response time by automatically rewriting a SQL query to use the materialized view instead of accessing the original tables. Query rewrite is particularly useful in a data warehouse environment.

What is the syntax for a materialized view?

CREATE MATERIALIZED VIEW materialized_view_name
BUILD [IMMEDIATE | DEFERRED]
REFRESH [FAST] [ON COMMIT | ON DEMAND]
ENABLE QUERY REWRITE
AS
sql_statement
/

Now a detailed explanation of the syntax:
1. CREATE MATERIALIZED VIEW name
Self-explanatory.

2. BUILD [IMMEDIATE | DEFERRED]
1. BUILD IMMEDIATE -- > the materialized view is populated with data from the base tables at creation time.
2. BUILD DEFERRED --- > the materialized view is not populated at creation time; we have to populate it later.

Eg:
With build immediate



create materialized view mymview1
build immediate
as
select ename,sum(sal) from emp group by ename;

SQL> select count(*) from mymview1;

COUNT(*)
----------
14

Note: the materialized view is populated with data at creation time.

With build deferred
create materialized view mymview2
build deferred
as
select ename,sum(sal) from emp group by ename;

SQL> select count(*) from mymview2;

COUNT(*)
----------
0

Note: the materialized view is not populated with data at creation time; we have to populate it manually through the DBMS_MVIEW.REFRESH procedure.
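As a sketch (assuming the MYMVIEW2 view created above), a deferred-build view can be populated manually like this:

```sql
-- 'C' requests a complete refresh; 'F' would request a fast refresh
-- (which requires a materialized view log on the base table).
EXEC DBMS_MVIEW.REFRESH('MYMVIEW2', method => 'C');
```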


3. REFRESH [FAST] [ON COMMIT | ON DEMAND]
Refresh is the method used to populate the materialized view with data whenever changes are made to its base tables.

The common refresh options for a materialized view are:
1. REFRESH FAST (a materialized view log is needed)
2. REFRESH ON COMMIT
3. REFRESH ON DEMAND

What is materialized view log?
A materialized view log is a schema object that records changes to a master table's data so that a materialized view defined on the master table can be refreshed incrementally.
Each materialized view log is associated with a single master table. The materialized view log resides in the same database and schema as its master table.
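As a hedged sketch using the scott.emp table from the other examples, a materialized view log and a fast-refreshable aggregate view might look like this (the exact fast-refresh requirements vary by version; see the Oracle documentation):

```sql
-- The log records changed rows on the master table; the referenced
-- columns, SEQUENCE, and NEW VALUES support fast refresh of aggregates.
CREATE MATERIALIZED VIEW LOG ON emp
  WITH SEQUENCE, ROWID (ename, sal)
  INCLUDING NEW VALUES;

-- A fast-refreshable aggregate view also needs COUNT(*) and COUNT(sal)
-- so that deletes can be applied incrementally.
CREATE MATERIALIZED VIEW mymview_fast
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
  AS SELECT ename, SUM(sal) sum_sal, COUNT(sal) cnt_sal, COUNT(*) cnt
     FROM emp
     GROUP BY ename;
```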

Eg:
With refresh on commit


SQL> select count(*) from emp;
COUNT(*)
----------
14
SQL> create materialized view MYVIEW1
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select ename, sum(sal) from emp group by ename;
Materialized view created.

SQL> select count(*) from myview1;
COUNT(*)
----------
14
SQL> insert into emp (empno,ename,sal) values(1754,'TAJ',2500);
1 row created.
SQL> commit;
Commit complete.
SQL> select count(*) from myview1;
COUNT(*)
----------
15
With refresh on demand
SQL> select count(*) from emp;

COUNT(*)
----------
15

SQL> create materialized view MYVIEW5
2 build immediate
3 refresh on demand
4 enable query rewrite
5 as
6 select ename,sum(sal) from emp group by ename;

Materialized view created.

SQL> select count(*) from myview5;

COUNT(*)
----------
15

SQL> insert into emp (empno,ename,sal)values(1100,'xyz',1000);

1 row created.

SQL> commit;

Commit complete.

SQL> select count(*) from emp;

COUNT(*)
----------
16

SQL> select count(*) from myview5;

COUNT(*)
----------
15

SQL> exec dbms_mview.refresh('MYVIEW5');

PL/SQL procedure successfully completed.

SQL> select count(*) from myview5;

COUNT(*)
----------
16

With REFRESH ON COMMIT, the materialized view is refreshed whenever changes to the base tables are committed. With REFRESH ON DEMAND, as shown above, the view stays stale until DBMS_MVIEW.REFRESH is called.

4. Enable query rewrite
A materialized view is like a query with a result that is materialized and stored in a table. When a user query is found compatible with the query associated with a materialized view, the user query can be rewritten in terms of the materialized view. This technique improves the execution of the user query, because most of the query result has been precomputed. The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query. The use of materialized views to rewrite a query is cost-based. That is, the query is not rewritten if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.
Note: QUERY_REWRITE_ENABLED=TRUE and QUERY_REWRITE_INTEGRITY=ENFORCED must be set to use the query rewrite feature.
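For example, the two parameters can be set at the session level while testing query rewrite:

```sql
ALTER SESSION SET query_rewrite_enabled = TRUE;
ALTER SESSION SET query_rewrite_integrity = enforced;
```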

5. How a materialized view improves query performance

Let’s check with demo

1. Create Big Table :)
SQL> create table big
2 as select * from all_objects
3 union all
4 select * from all_objects
5 union all
6 select * from all_objects
7 union all
8 select * from all_objects
9 union all
10 select * from all_objects;

Table created.

SQL> select count(*) from big;

COUNT(*)
----------
188995

2. Now execute the query below and check the query statistics
SQL> alter system flush shared_pool;

System altered.

SQL> set autotrace traceonly
SQL> set timing on
SQL> select owner,count(*) from big group by owner;

12 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=624 Card=188024 By
tes=3196408)

1 0 SORT (GROUP BY) (Cost=624 Card=188024 Bytes=3196408)
2 1 TABLE ACCESS (FULL) OF 'BIG' (TABLE) (Cost=598 Card=1880
24 Bytes=3196408)





Statistics
----------------------------------------------------------
957 recursive calls
0 db block gets
2844 consistent gets---- > too big :)
0 physical reads
0 redo size
1028 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
12 rows processed

3. Now create the materialized view and try again
SQL> create materialized view MYVIEWBIG
2 build immediate
3 refresh on commit
4 enable query rewrite
5 as
6 select owner,count(*) from big group by owner;

Materialized view created.

SQL> alter system flush shared_pool;

System altered.

SQL> set autotrace traceonly
SQL> set timing on
SQL> select owner,count(*) from big group by owner;

12 rows selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=12 Bytes=36
0)

1 0 MAT_VIEW REWRITE ACCESS (FULL) OF 'MYVIEWBIG' (MAT_VIEW RE
WRITE) (Cost=3 Card=12 Bytes=360)





Statistics
----------------------------------------------------------
7356 recursive calls
0 db block gets
1313 consistent gets --- > just less than half :)
0 physical reads
0 redo size
1028 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
49 sorts (memory)
0 sorts (disk)
12 rows processed


4. What privileges are needed for a materialized view?

To create a materialized view in your own schema, you must have the CREATE MATERIALIZED VIEW privilege and the SELECT privilege to any tables referenced that are in another schema. To create a materialized view in another schema, you must have the CREATE ANY MATERIALIZED VIEW privilege and the owner of the materialized view needs SELECT privileges to the tables referenced if they are from another schema. Moreover, if you enable query rewrite on a materialized view that references tables outside your schema, you must have the GLOBAL QUERY REWRITE privilege or the QUERY REWRITE object privilege on each table outside your schema.

5. How to check Status of a Materialized View?

SQL> select mview_name,query,rewrite_enabled,refresh_mode,build_mode,staleness,c
ompile_state
2 from user_mviews;

MVIEW_NAME QUERY R REFRES BUILD_MOD STALENESS COMPILE_ST
---------- -------------------- - ------ --------- ---------- ----------

MYBIG select owner, count( Y DEMAND IMMEDIATE NEEDS_COMP NEEDS_COMP
*) from big group by ILE ILE
owner

MYMVIEW2 select ename,sum(sal N DEMAND DEFERRED UNUSABLE NEEDS_COMP
) from emp group by ILE
ename

MYVIEW5 select ename,sum(sal Y DEMAND IMMEDIATE FRESH VALID
) from emp group by
ename

MYMVIEW3 select ename,sum(sal Y COMMIT IMMEDIATE UNUSABLE VALID
) from emp group by
ename

MYVIEW1 select ename, sum(sa Y COMMIT IMMEDIATE FRESH VALID
l) from emp group by

Explanation:
MVIEW_NAME -------- > Materialized view name
QUERY ------------- > Query that defines the materialized view
REWRITE_ENABLED --- > "Y" if query rewrite is enabled for the view, otherwise "N"
REFRESH_MODE ------ > DEMAND, COMMIT, or NEVER
BUILD_MODE -------- > IMMEDIATE or DEFERRED
STALENESS --------- > Relationship between the contents of the materialized view and the contents of the materialized view's masters
COMPILE_STATE ----- > Validity of the materialized view with respect to the objects upon which it depends
VALID -> none of the objects the materialized view depends on has changed since the last validation
NEEDS_COMPILE -> an object the materialized view depends on has changed; an "alter materialized view … compile;" statement is required to validate the materialized view

Wednesday, October 31, 2007

Why I Choose Oracle as my Career ?

Dear Friends,

There are some questions that every Oracle newbie asks frequently on the forums.

1. Which Oracle certification should I choose?
Here I will explain in detail.

There are three types of Oracle certification we can do:
1. Oracle DBA
2. Oracle Developer
3. Oracle Application DBA

1. Oracle DBA
The person who maintains the Oracle database is called an Oracle Database Administrator.
Some frequent tasks for an Oracle DBA:
1. Database backup responsibility
2. Database performance tuning
3. Basic database administration, like user creation and database security
The above work is completely dedicated to the Oracle DBA, and every company that uses an Oracle database always needs an Oracle DBA.

Certification: Oracle 9i
http://dbataj.blogspot.com/2007/08/oracle-9i-certified-professional.html

Certification: Oracle 10g
http://dbataj.blogspot.com/2007/03/oracle-certified-professional.html

2. Oracle Developer
The person who writes code and designs forms and reports for front-end applications.
Some frequent tasks for an Oracle Developer:
1. Coding
2. Creating and designing forms and reports
3. SQL tuning
The above work is completely dedicated to the Oracle Developer.

Certification: Oracle9i,10g
http://dbataj.blogspot.com/2007/08/oracle-forms-developer-certified.html


3. Oracle Application DBA

The person who works on ERP (enterprise resource planning).
Some frequent tasks for an Oracle Application DBA:
1. Implementing Oracle ERP according to company requirements
2. Patching and cloning
3. Maintaining application modules like financials, HR, etc.

Certification: Oracle10g
http://dbataj.blogspot.com/2007/08/oracle-11i-apps-dba.html
________________________________________________________________

2. Where do I get real-time experience?

This is a very important and difficult question :)

Because without experience we can't get a job, and without a job we can't get real-time experience :)

So first, understand what real-time experience is.
Real-time experience means many users are working in your Oracle database and the DBA has to maintain that database; if any error comes up, the DBA has to fix it as soon as possible.

We can't get this experience without a job, right?
Ans: Yes, right.

Now, if we drop the words REAL TIME, can we get plain experience without a job?
Ans: Yes, we can.

How? I will show you.

First, understand what experience is.
Experience means having good concepts along with some practical knowledge of the Oracle database.

Now the question is how to get good conceptual knowledge with some practical knowledge:

1. Buy one PC with a minimum of 512 MB RAM and an 80 GB hard disk.
2. Install Oracle 10g Database or Oracle 11g.
It is free; you can download it here:
http://www.oracle.com/technology/software/products/database/index.html

Note: Oracle 11g has also been released on Windows and Linux platforms.

3. Download the books below from the Oracle documentation:
1. Oracle Concepts
http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/toc.htm

2. Oracle Administrator's Guide
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/toc.htm

3. Oracle SQL Reference
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/toc.htm

4. Oracle Performance Tuning Guide
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm

4. Now start reading step by step: first read book 1, then book 2, and so on.

5. If you face a problem, any error message or anything you don't understand, then please refer to the Oracle forums:
1. Oracle OTN forums
http://forums.oracle.com/forums/category.jspa?categoryID=18
2. OraFAQ forums
http://www.orafaq.com/forum/
3. AskTom
http://asktom.oracle.com/pls/asktom/f?p=100:1:4506654324364228

4. Some Oracle gurus' blogs
http://jonathanlewis.wordpress.com/
http://tkyte.blogspot.com/
http://tonguc.wordpress.com/
http://jaffardba.blogspot.com

I am not an Oracle guru, but visit my blogs also :)
http://dbataj.blogspot.com
http://babudba.blogspot.com
http://sabarsyed.blogspot.com
5. Some good sites
http://www.akadia.com/
http://www.adp-gmbh.ch/
http://www.psoug.org/

After following the above instructions, you will be a very knowledgeable Oracle person.
________________________________________________________________

3. How can I get an Oracle job?

Do you think that after completing the second point (above) you will not get a job?
Ans: No, we will get a job... hurray :)

But also keep the things below in mind when you go for an interview:
1. Make sure you have a very good RESUME/CV.
If anybody wants to know how to create a CV for Oracle, please email me at asktaj and I will check and correct your CV.

2. First decide which subject you are strongest in:
1. Administration
2. Backup and Recovery
3. Performance Tuning

3. Good Dress Code

________________________________________________________________

Hope this article gives you a good understanding of Oracle certification, experience, and jobs.

Please give me your comments.
Thank You

Sunday, October 28, 2007

Oracle Latch


My Signature Article



Tips:
Oracle Certification - Tips
DBA Preparation - Tips
Practical DBA - Tips

What is Latch ?

A mechanism to protect shared data structures in the System Global Area.
For Example: latches protect the list of users currently accessing the database and protect the data structures describing the blocks in the buffer cache.

A server or background process acquires a latch for a very short time while manipulating or looking at one of these structures.


During DB performance analysis we will see LATCH events... so what is a latch event, and how many types of latch events are there?

A latch is a low-level internal lock used by Oracle to protect memory structures.

The latch free event is updated when a server process attempts to get a latch and the latch is unavailable on the first attempt.

The most popular latch wait events are...



1. Latch: library cache or Latch: shared pool

Below are possible causes for both of the above latch events.

1. Lack of statement reuse
2. Statements not using bind variables
3. Insufficient size of application cursor cache
4. Cursors closed explicitly after each execution
5. Frequent logon/logoffs
6. Underlying object structure being modified (for example truncate)
7. Shared pool too small

Below are possible suggestions to avoid both of the above latch events.

1. Increase the SHARED_POOL_SIZE parameter value.
2. Modify the front-end application to use BIND VARIABLES.
3. Use CURSOR_SHARING='FORCE' (on a temporary basis).


2. Latch: cache buffers lru chain

Possible Causes

1. Inefficient SQL that accesses incorrect indexes iteratively (large index range scans) or many full table scans.
2. DBWR not keeping up with the dirty workload; hence, the foreground process spends longer holding the latch looking for a free buffer
3. Cache may be too small

Possible Suggestion

1. Look for: Statements with very high logical I/O or physical I/O, using unselective indexes
2. Increase DB_CACHE_SIZE parameter value.
3. The cache buffers lru chain latches protect the lists of buffers in the cache. When adding, moving, or removing a buffer from a list, a latch must be obtained.

For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.

Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. LRU latch contention is detected by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider tuning the application, bypassing the buffer cache for DSS jobs, or redesigning the application.


3. Latch: cache buffers chains

Possible Causes

1. Repeated access to a block (or small number of blocks), known as a hot block
2. From AskTom:

Contention for these latches can be caused by:

- Very long buffer chains.
- very very heavy access to the same blocks.

Possible Suggestion

1. From AskTom:
When I see this, I try to see what SQL the waiters are trying to execute. Many times,
what I find, is they are all running the same query for the same data (hot blocks). If
you find such a query -- typically it indicates a query that might need to be tuned (to
access less blocks hence avoiding the collisions).

If it is long buffer chains, you can use multiple buffer pools to spread things out. You
can use DB_BLOCK_LRU_LATCHES to increase the number of latches. You can use both
together.

The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching for, adding, or removing a buffer from the buffer cache. Contention on this latch usually means that there is a block that is greatly contended for (known as a hot block).

To identify the heavily accessed buffer chain, and hence the contended for block, look at latch statistics for the cache buffers chains latches using the view V$LATCH_CHILDREN. If there is a specific cache buffers chains child latch that has many more GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended for child latch.

This latch has a memory address, identified by the ADDR column. Use the value in the ADDR column joined with the X$BH table to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily contended latch, this queries the file and block numbers:

SELECT OBJ data_object_id, FILE#, DBABLK,CLASS, STATE, TCH
FROM X$BH
WHERE HLADDR = 'address of latch'
ORDER BY TCH;

X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.

Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is a potential hot block. Perform this query a number of times, and identify the block that consistently appears in the output. After you have identified the hot block, query DBA_EXTENTS using the file number and block number, to identify the segment.

After you have identified the hot block, you can identify the segment it belongs to with the following query:

SELECT OBJECT_NAME, SUBOBJECT_NAME
FROM DBA_OBJECTS
WHERE DATA_OBJECT_ID = &obj;

In the query, &obj is the value of the OBJ column in the previous query on X$BH.


4. Latch: row cache objects

The row cache objects latches protect the data dictionary.
Suggestion: Increase SHARED_POOL_SIZE parameter to avoid this latch.
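As a starting point for any of the latch events above, a simple query against V$LATCH shows which latches are suffering the most sleeps (a sketch; column meanings follow the standard V$LATCH view):

```sql
-- Sleeps are a rough indicator of real contention; misses alone
-- may be absorbed by spinning on the latch.
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  sleeps > 0
ORDER  BY sleeps DESC;
```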

Thursday, October 25, 2007

System Statistics


What are system statistics?


System statistics describe the system's hardware characteristics, such as I/O and CPU performance and utilization, to the query optimizer.

Why gather SYSTEM STATISTICS?

When choosing an execution plan, the optimizer estimates the I/O and CPU resources required for each query.
System statistics enable the query optimizer to more accurately estimate I/O and CPU costs, enabling the query optimizer to choose a better execution plan.

Why is it important to gather system statistics?

System statistics give an accurate cost for a SQL query, so the optimizer makes good decisions.
If you have good statistics, then queries are planned well and database performance increases.

Many Oracle gurus, and Oracle Corp. as well, recommend gathering system statistics.

How to gather system statistics?

Through the DBMS_STATS package we can gather system statistics.

Step 1
SQL> exec dbms_stats.gather_system_stats('start');

Step 2
SQL> -- Wait for some time: at least 1 hour, or 2 hours, or a whole day, depending on database load.

Step 3
SQL> exec dbms_stats.gather_system_stats('stop');
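Alternatively (a sketch), the start/stop pair can be replaced by interval mode, where Oracle stops the capture itself after the given number of minutes:

```sql
-- Capture workload statistics for the next 60 minutes.
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval => 60);
```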


Keep in mind:
1. The user must be granted the DBA role to gather system statistics.
2. After gathering system statistics, unlike table, index, or column statistics, Oracle does not invalidate already parsed SQL statements when system statistics get updated. All new SQL statements are parsed using the new statistics.
3. Always gather system statistics during a heavy peak load.


Where can you check the gathered system statistics?

After gathering system statistics, query the sys.aux_stats$ view.

SQL> select pname, pval1 from sys.aux_stats$;

PNAME PVAL1
------------------------------ ----------
STATUS
DSTART
DSTOP
FLAGS 1
CPUSPEEDNW 904.86697
IOSEEKTIM 10
IOTFRSPEED 4096
SREADTIM .995
MREADTIM 1.701
CPUSPEED 1268
MBRC 16
MAXTHR
SLAVETHR

13 rows selected.

Note:
CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED are new statistics that come with Oracle 10g, and they are already populated before you gather system statistics.


SREADTIM

Single-block read time: the average time to read a single block randomly.

MREADTIM

Multiblock read time: the average time to read a multiblock I/O sequentially.

MBRC

Multiblock read count: the average number of blocks read per sequential multiblock read.

Active Session History (ASH)

Tips:
Oracle Certification - Tips

DBA Preparation - Tips
Practical DBA - Tips

Active Session History (ASH)


Oracle 10gR2 introduces a new option for capturing performance problems:
ASH >>> Active Session History <<<
What is ASH?

Use ASH whenever you want information about blocker and waiter identifiers and their associated transaction IDs and SQL.

About V$ACTIVE_SESSION_HISTORY


1. The V$ACTIVE_SESSION_HISTORY view provides sampled session activity in the instance.

2. Active sessions are sampled every second and are stored in a circular buffer in SGA.

3. Any session that is connected to the database and is waiting for an event that does not belong to the Idle wait class is considered as an active session.

4. This includes any session that was on the CPU at the time of sampling.

5. Each session sample is a set of rows and the V$ACTIVE_SESSION_HISTORY view returns one row for each active session per sample, returning the latest session sample rows first. Because the active session samples are stored in a circular buffer in SGA, the greater the system activity, the smaller the number of seconds of session activity that can be stored in the circular buffer. This means that the duration for which a session sample appears in the V$ view, or the number of seconds of session activity that is displayed in the V$ view, is completely dependent on the database activity.

6. Using Active Session History enables you to examine and perform detailed analysis on both the current data in the V$ACTIVE_SESSION_HISTORY view and the historical data in the DBA_HIST_ACTIVE_SESS_HISTORY view.

7. Each session sample also records the SQL identifier of the SQL statement being executed.


What information does the ASH view provide?


1. Object number, file number, and block number

2. Wait event identifier and parameters

3. Session identifier and session serial number

4. Module and action name

5. Client identifier of the session

6. Service hash identifier
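For a quick look without generating the full report, the view can be queried directly. For example (a sketch), the top wait events sampled in the last 10 minutes:

```sql
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 10/1440   -- last 10 minutes
AND    event IS NOT NULL                 -- skip on-CPU samples
GROUP  BY event
ORDER  BY samples DESC;
```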


How to generate an ASH report?

This is a new feature of 10gR2 (10.2.0.1.0).
For report creation we have to use the ASHRPT.SQL script,
located in the $ORACLE_HOME/rdbms/admin folder.


How to run ASHRPT.SQL script


To generate a text report of ASH information, run the ashrpt.sql script at the SQL prompt:

@$ORACLE_HOME/rdbms/admin/ashrpt.sql

First, you need to specify whether you want an HTML or a text report.

Enter value for report_type: text

Specify the time frame for collecting ASH information by first specifying the begin time, in minutes, prior to the system date.

Enter value for begin_time: -10
Note: here you just put a number of minutes, e.g. 10 for 10 minutes.

Next, enter the duration in minutes for which you want to capture ASH information from the begin time. The default duration of system date minus begin time is accepted in the following example:

Enter value for duration:
Note: leave it blank for the default value (system date minus begin time).

The report in this example will gather ASH information beginning from 10 minutes before the current time and ending at the current time. Next, accept the default report name or enter a report name. The default name is accepted in the following example:

Enter value for report_name:
Using the report name ashrpt_1_0310_0131.txt
Note: Left it blank for default value.

The session history report is generated.


For further detail please check Metalink Note 243132.1.

ORA-02449


ORA-02449: unique/primary keys in table referenced by foreign keys


SQL> drop tablespace users including contents and datafiles ;
drop tablespace users including contents and datafiles
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys


Whenever you get the ORA-02449 error while dropping a tablespace, just add the CASCADE CONSTRAINTS clause to the DROP TABLESPACE statement. It drops the referential integrity constraints from tables outside the tablespace that refer to primary and unique keys in the tables of the dropped tablespace.



SQL> drop tablespace users including contents and datafiles cascade constraints;


Tablespace dropped.
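Before adding CASCADE CONSTRAINTS, you may want to see which foreign keys outside the tablespace point at keys inside it. A sketch of a dictionary query (the tablespace name 'USERS' is taken from the example above; adjust it for your case):

```sql
-- Foreign keys referencing primary/unique keys on tables in USERS
SELECT fk.owner, fk.table_name, fk.constraint_name
  FROM dba_constraints fk
  JOIN dba_constraints pk
    ON fk.r_owner = pk.owner
   AND fk.r_constraint_name = pk.constraint_name
  JOIN dba_tables t
    ON pk.owner = t.owner
   AND pk.table_name = t.table_name
 WHERE fk.constraint_type = 'R'          -- referential constraints
   AND t.tablespace_name = 'USERS';
```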

Sunday, October 21, 2007

AWR Report/Snapshot





Prior to Oracle 10g we used the STATSPACK report to analyze instance health and performance, but in 10g we use the AWR (Automatic Workload Repository) report to analyze Oracle database health.


How to generate an AWR report?


It is very easy to generate an AWR report in Oracle 10g.

We have to run just one SQL script to generate the AWR report.


There are TWO SQL scripts to create an AWR report:
1. awrrpt.sql
If we have only one Oracle instance, run the awrrpt.sql script.

2. awrrpti.sql
If we have more than one Oracle instance (as in RAC), run the awrrpti.sql script so that we can choose a particular instance for AWR report creation.



Location of AWR report sql script


$ORACLE_HOME/rdbms/admin
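For example, from a SQL*Plus session (the ? shorthand expands to ORACLE_HOME; the script then prompts for report type, snapshot range, and report name):

```sql
-- Generate an AWR report interactively
@?/rdbms/admin/awrrpt.sql
```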


Example Of AWR report


AWR report Example


What is the default interval between two AWR snapshots?


AWR snapshots are generated automatically for the database at a 1-hour interval.
But a 15-minute interval between two snapshots is often recommended for pinpointing performance bottlenecks.


How to manage and change the AWR snapshot interval?


There are two ways to modify the AWR settings:
1. Manually ---> through the DBMS_WORKLOAD_REPOSITORY PL/SQL package

2. GUI ----> OEM (Oracle Enterprise Manager)
Log in to OEM ---> Administration tab ----> Workload section ---> AUTOMATIC WORKLOAD REPOSITORY
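For the manual route, a sketch using the DBMS_WORKLOAD_REPOSITORY package (both parameters are in minutes; the 15-minute interval matches the recommendation above — check the package documentation for your release):

```sql
-- Set snapshot interval to 15 minutes and retention to 7 days
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 15,          -- minutes between snapshots
    retention => 7 * 24 * 60  -- minutes to keep snapshots
  );
END;
/

-- Take an extra snapshot on demand
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Verify the current settings
SELECT snap_interval, retention FROM dba_hist_wr_control;
```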


How to interpret an AWR report?


Below are the main sections of concern in an AWR report.

Load Profile

In this section we learn about logical reads, physical reads, soft parses, and hard parses.

Instance Efficiency Percentages

Many Oracle gurus recommend not relying on the instance efficiency ratios, because even if your BUFFER CACHE HIT RATIO shows 100%, that is no sign that your database is fast and running smoothly.

Top 5 Timed Events

This is a very important section of the AWR report.
Through it we learn which five wait events are most affecting database performance.

Oracle Wait Event


SQL Statistics

In this section we learn the execution time, CPU used, and logical and physical reads for the top SQL statements.


Advisory Statistics

In this section we get sizing advice for the PGA, buffer cache, shared pool, and Java pool for better performance.


Visit the following link for a reference on interpreting AWR reports:

Reference I

Thursday, October 18, 2007

EXECUTION PLAN


What is execution plan ?


To run a DML statement, Oracle might need to perform many steps. Each of these steps either retrieves rows of data physically from the database or prepares them in some way for the user issuing the statement. The combination of the steps Oracle uses to run a statement is called an execution plan. An execution plan includes an access method for each table that the statement accesses and an ordering of the tables (the join order). The steps of the execution plan are not performed in the order in which they are numbered.


How to generate execution plan ?


There are three ways to view an execution plan:

1. Through AUTOTRACE

SQL> set autotrace trace exp
SQL> select * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=51
8)

1 0 TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes
=518)


2. Through DBMS_XPLAN package

SQL> explain plan for select * from emp;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 518 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| EMP | 14 | 518 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.

3. Through a trace file (trace event 10046 or sql_trace=true), formatted with the TKPROF utility.
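As a minimal sketch of the third way (sql_trace variant; the 10046 variant is covered in detail in the next post, and trace file names and paths will differ on your system):

```sql
-- Enable basic SQL tracing for this session
ALTER SESSION SET sql_trace = TRUE;

SELECT * FROM emp;

-- Disable tracing; the raw trace file is written to USER_DUMP_DEST
ALTER SESSION SET sql_trace = FALSE;
```

Then format the raw trace file with TKPROF, e.g. tkprof <tracefile>.trc output.log, and read the Row Source Operation section for the plan.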

Wednesday, October 17, 2007

10046 trace event


What is the 10046 trace event?


Whenever we want to tune a SQL statement and learn about its waits and bind variables, we can use the 10046 trace event.


How to use the 10046 trace event


First, define a trace file identifier so we can easily find our trace files in the UDUMP folder.

SQL> alter session set tracefile_identifier='MYSESSION';

Enable 10046 tracing.

SQL> alter session set events '10046 trace name context forever, level 8';

Now execute SQL Statements...

select e.empno, e.ename, d.dname, d.deptno
from emp e , dept d
where e.deptno = d.deptno;

Disable 10046 tracing.

SQL> alter session set events '10046 trace name context off';

Check UDUMP directory for generated trace file.
file name like "XXXX_MYSESSION.TRC"

C:\Oracle\admin\ora9i\udump>dir *MYSESSION.trc

The generated trace file is a RAW trace file, very hard to read and understand, so use the TKPROF utility to create a readable output file from it.

C:\Oracle\admin\ora9i\udump>tkprof ORA01904_MYSESSION.TRC c:\output.log

TKPROF: Release 10.1.0.5.0 - Production on Wed Oct 17 19:01:44 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Open the c:\output.log file and analyze it. Below is the content of output.log:

select e.empno, e.ename, d.dname, d.deptno
from emp e , dept d
where e.deptno = d.deptno


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.29 0.31 2 0 2 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 3 30 2 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.29 0.31 5 30 4 14


Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 66


--- Below is the execution plan

Rows Row Source Operation
------- ---------------------------------------------------
14 NESTED LOOPS
14 TABLE ACCESS FULL EMP
14 TABLE ACCESS BY INDEX ROWID DEPT
14 INDEX UNIQUE SCAN (object id 32119)



--- Below is the wait time information.


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 3 0.00 0.00
SQL*Net message from client 2 56.49 56.51





10046 Trace Level


Level 1
Basic trace level. Like the standard SQL_TRACE trace file. Provides statistics for parse, execute, fetch, commit and rollback database calls.

Level 4
Displays bind variables

Level 8
Displays wait statistics

Level 12
Displays wait statistics and bind variables
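The level goes in the event string itself. For example, to capture both waits and bind variables (level 12) and then switch tracing off, following the same syntax used above:

```sql
-- Level 12 = wait statistics + bind variables
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the statements to be traced ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```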

Tuesday, October 16, 2007

Autotrace in SQLPLUS


What is AUTOTRACE


In SQL*Plus you can automatically get a report on the execution path used by the SQL optimizer and the statement execution statistics. The report is generated after a successful SQL DML statement, such as SELECT, DELETE, UPDATE or INSERT. It is useful for monitoring and tuning the performance of these DML statements.


How to configure AUTOTRACE in SQL*Plus for database users


1. Change to the rdbms/admin directory and connect to the database as the SYSTEM user.
C:\>cd c:\oracle\product\10.1.0\db_1\rdbms\admin

C:\Oracle\product\10.1.0\Db_1\RDBMS\ADMIN>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 16 17:08:20 2007

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> conn system/manager
Connected.

2. Run the utlxplan.sql script to create the plan table.
SQL> @utlxplan.sql

3. Create a public synonym for the plan table and grant all privileges on it.
SQL> create public synonym plan_table for plan_table;

SQL> grant all on plan_table to public;

Grant succeeded.

4. Exit, change to the sqlplus/admin directory, and connect to the database as the SYS user.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production

C:\Oracle\product\10.1.0\Db_1\RDBMS\ADMIN>cd\

C:\>cd c:\oracle\product\10.1.0\db_1\sqlplus\admin

C:\Oracle\product\10.1.0\Db_1\sqlplus\admin>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 16 17:12:07 2007

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> conn sys as sysdba
Enter password:
Connected.

5. Run the plustrce.sql script; it must be run as the SYS user.
SQL> @plustrce
SQL>
SQL> drop role plustrace;

Role dropped.

SQL> create role plustrace;

Role created.

SQL>
SQL> grant select on v_$sesstat to plustrace;

Grant succeeded.

SQL> grant select on v_$statname to plustrace;

Grant succeeded.

SQL> grant select on v_$mystat to plustrace;

Grant succeeded.

SQL> grant plustrace to dba with admin option;

Grant succeeded.

SQL> set echo off

6. Grant plustrace role to public.
SQL> grant plustrace to public;

Grant succeeded.


Configuring the Autotrace Report


SET AUTOTRACE OFF           - No AUTOTRACE report is generated. This is the default.
SET AUTOTRACE ON EXPLAIN    - The AUTOTRACE report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL statement execution statistics.
SET AUTOTRACE ON            - The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY     - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.


Database Statistics for SQL Statements


recursive calls
Number of recursive calls generated at both the user and system level. Oracle maintains tables used for internal processing; when Oracle needs to change these tables, it generates an internal SQL statement, which in turn generates a recursive call.

db block gets
Number of times a CURRENT block was requested.

consistent gets
Number of times a consistent read was requested for a block.

physical reads
Total number of data blocks read from disk. This number equals the value of "physical reads direct" plus all reads into buffer cache.

redo size
Total amount of redo generated in bytes.

bytes sent via SQL*Net to client
Total number of bytes sent to the client from the foreground processes.

bytes received via SQL*Net from client
Total number of bytes received from the client over Oracle Net.

SQL*Net roundtrips to/from client
Total number of Oracle Net messages sent to and received from the client.

sorts (memory)
Number of sort operations that were performed completely in memory and did not require any disk writes.

sorts (disk)
Number of sort operations that required at least one disk write.

rows processed
Number of rows processed during the operation.


EXAMPLE


SQL> set autotrace on
SQL> select * from dept;

DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

SQL> set autotrace traceonly
SQL> select * from dept;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed

SQL> set autotrace trace explain
SQL> select * from dept;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'DEPT'



SQL> set autotrace trace statistics
SQL> select * from dept;


Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
2 consistent gets
0 physical reads
0 redo size
702 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
4 rows processed