
Wednesday, July 30, 2008

Oracle Data Pump 11g

The Data Pump utility has been enhanced in Oracle 11g.

Oracle 11g introduces some new features for the Data Pump utility. I tested a few of them and want to share the results with you all.

For the EXPDP utility

1. compression parameter

If export dump file size is a concern, in 11g it can be addressed with the COMPRESSION parameter.

So we no longer need a third-party tool to compress the exported dump file, because the COMPRESSION parameter compresses the dump file automatically during export.

Syntax:

COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]

Default: metadata_only

I tested the compression option on the same schema: the first export was taken without compression and the second with COMPRESSION=ALL. See the difference in size:

29-07-2008 07:10 PM 1,169,989,632 A1withoutCMPR.DMP

29-07-2008 07:15 PM 317,968,384 A2withCMPR.DMP
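As a sketch, a compressed schema export might look like this (the schema, directory object, and file names here are placeholders, not the ones used in the test above; note also that COMPRESSION=ALL and DATA_ONLY in 11g require the Advanced Compression option):

```shell
# Hypothetical example: export the HR schema with full compression.
# "dpump_dir" must be an existing Oracle directory object.
expdp hr/hr directory=dpump_dir dumpfile=hr_compressed.dmp \
      schemas=hr compression=ALL
```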

2. encryption & encryption_password parameter

This is a most useful parameter, because through it we can protect the exported dump file from unauthorized access.

Syntax:

ENCRYPTION=[ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE]

Default: NONE

ENCRYPTION_PASSWORD: there is no default; the value must be supplied by the user.
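A sketch of a password-encrypted export (schema, directory, file name, and password are placeholders; when an ENCRYPTION_PASSWORD is given without an Oracle wallet, password-based encryption is used):

```shell
# Hypothetical example: encrypt the entire dump file with a password.
expdp hr/hr directory=dpump_dir dumpfile=hr_enc.dmp schemas=hr \
      encryption=ALL encryption_password=MySecret123
```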

3. remap_data parameter

This is another useful option for the export utility.

The REMAP_DATA parameter allows us to specify a remap function that takes the original value of the designated column as input and returns a remapped value, which replaces the original value in the dump file.

For example: data moves from a production server to a test server, and one table, EMPLOYEE, contains every employee's bank account number. For security we do not want to disclose these values to the test-server team, and the remap function lets us prevent that.

For this we need to create a package containing a function that replaces or masks the BANKAC value for the column.

Syntax:

REMAP_DATA=schema.table_name.column_name:schema.pkg.function

For example:

REMAP_DATA=hr.employee.bankac:hr.package.function
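A sketch of such a package (the package and function names here are hypothetical; the function must accept and return the column's datatype):

```sql
-- Hypothetical remap package: masks bank account numbers during export,
-- keeping only the last four characters visible.
CREATE OR REPLACE PACKAGE hr.mask_pkg AS
  FUNCTION mask_bankac (p_bankac IN VARCHAR2) RETURN VARCHAR2;
END mask_pkg;
/
CREATE OR REPLACE PACKAGE BODY hr.mask_pkg AS
  FUNCTION mask_bankac (p_bankac IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    -- Short or NULL values are returned unchanged.
    IF p_bankac IS NULL OR LENGTH(p_bankac) <= 4 THEN
      RETURN p_bankac;
    END IF;
    -- Replace everything except the last four characters with 'X'.
    RETURN LPAD('X', LENGTH(p_bankac) - 4, 'X') || SUBSTR(p_bankac, -4);
  END mask_bankac;
END mask_pkg;
/
```

It would then be referenced on export as REMAP_DATA=hr.employee.bankac:hr.mask_pkg.mask_bankac.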

4. reuse_dumpfiles parameter

This is another useful parameter. Suppose we export a dump file daily under the same name; if a file with that name already exists, the export terminates with the following errors:

ORA-39001: invalid argument value

ORA-39000: bad dump file specification

ORA-31641: unable to create dump file "c:\test.dmp"

ORA-27038: created file already exists

OSD-04010: option specified, file already exists

To prevent this error we can use REUSE_DUMPFILES=Y, which overwrites any existing file with the same name.

Syntax:

REUSE_DUMPFILES=y

Default: N
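A sketch of a daily export that overwrites the previous day's file (names are placeholders):

```shell
# Hypothetical example: overwrite yesterday's dump file of the same name.
expdp hr/hr directory=dpump_dir dumpfile=daily.dmp schemas=hr \
      reuse_dumpfiles=Y
```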

For the IMPDP utility

1. encryption_password parameter

If the dump file was exported with the ENCRYPTION_PASSWORD parameter, then we must supply the same password during import; otherwise the import terminates with the following errors:

ORA-39002: invalid operation

ORA-39174: Encryption password must be supplied.

NOTE: Password is case-sensitive.
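A sketch of the matching import (directory, file name, and password are placeholders; the password must match the one used at export time):

```shell
# Hypothetical example: importing a password-encrypted dump file.
impdp hr/hr directory=dpump_dir dumpfile=hr_enc.dmp \
      encryption_password=MySecret123
```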

2. partition_options parameter

This is another cool feature that I came across.

Syntax:

PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE]

Default: NONE (DEPARTITION when the export was performed in transportable mode)

If NONE is specified, no change is made during import; tables are created as they exist in the source database.

If DEPARTITION is specified, each partition is created as a separate table in the target database.

If MERGE is specified, all partitions are combined into one non-partitioned table.

For example: when partition_options=DEPARTITION

1. Create One table with 4 partition

CREATE TABLE sales
( invoice_no NUMBER,
sale_year INT NOT NULL )
PARTITION BY RANGE (sale_year)
( PARTITION sales_q1 VALUES LESS THAN (1999),
PARTITION sales_q2 VALUES LESS THAN (2000),
PARTITION sales_q3 VALUES LESS THAN (2001),
PARTITION sales_q4 VALUES LESS THAN (2002));

2. check the partitions

SQL> select partition_name from user_tab_partitions where table_name='SALES';

PARTITION_NAME
------------------------------
SALES_Q1
SALES_Q2
SALES_Q3
SALES_Q4

3. expdp the table

SQL> host expdp scott/tiger dumpfile=sales.dmp directory=data tables=SALES

4. drop the table

SQL> drop table sales purge;
Table dropped.

5. Import the table with PARTITION_OPTIONS=DEPARTITION

SQL> host impdp scott/tiger dumpfile=sales.dmp directory=data partition_options=DEPARTITION

This time four tables are created, one for each partition.

SQL> select * from tab where tname like 'S%';

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
SALES_SALES_Q1                 TABLE
SALES_SALES_Q2                 TABLE
SALES_SALES_Q3                 TABLE
SALES_SALES_Q4                 TABLE

Check the partitions view:

SQL> select partition_name from user_tab_partitions;

no rows selected

Next we import the same export dump again, this time with PARTITION_OPTIONS=MERGE.

SQL> host impdp scott/tiger dumpfile=sales.dmp directory=data partition_options=MERGE

This time only one table created for all partitions.

SQL> select * from tab where tname like 'S%';

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ----------
SALES                          TABLE

Check the partitions view

SQL> select partition_name from user_tab_partitions where table_name='SALES';

no rows selected
