Tuesday, January 13, 2009

Understanding the SCN

In order to understand how Oracle performs recovery, it’s first necessary to understand Oracle’s SCN in terms of the various places where it can be stored and how it’s used for instance and media recovery.

The SCN is an internal number maintained by the database management system (DBMS) to log changes made to a database. The SCN increases over time as changes are made to the database by Structured Query Language (SQL). By understanding how the SCN is used, you can understand how Oracle recovery works. Oracle9i enables you to examine the current SCN using the following SQL:

SQL> select dbms_flashback.get_system_change_number from dual;

Whenever an application commits a transaction, the log writer process (LGWR) writes records from the redo log buffer in the System Global Area (SGA) to the online redo logs on disk. LGWR also writes the transaction’s SCN to the online redo log file. The success of this atomic write event determines whether your transaction succeeds, and it requires a synchronous (wait-until-completed) write to disk.


Note: The need for a synchronous write upon commit is one of the reasons why the online redo log can become a bottleneck for applications and why you should commit as infrequently as is practical. In general, Oracle writes asynchronously to the database datafiles for performance reasons, but commits require a synchronous write because they must be guaranteed at the time they occur.

SCN and Checkpoints:

A checkpoint occurs when all modified database buffers in the Oracle SGA are written out to datafiles by the database writer (DBWn) process. The checkpoint process (CKPT) updates all datafiles and control files with the SCN at the time of the checkpoint and signals DBWn to write out the blocks. A successful checkpoint guarantees that all database changes up to the checkpoint SCN have been recorded in the datafiles. As a result, only those changes made after the checkpoint need to be applied during recovery. Checkpoints occur automatically as follows:

  • Whenever a redo log switch takes place
  • Whenever the time set by the LOG_CHECKPOINT_TIMEOUT initialization parameter is reached
  • Whenever the amount of redo written reaches the number of bytes associated with the LOG_CHECKPOINT_INTERVAL

Typically, LOG_CHECKPOINT_INTERVAL is chosen so that checkpoints only occur on log switches. Oracle stores the SCN associated with the checkpoint in four places: three of them in the control file and one in the datafile header for each datafile.
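
You can also force a checkpoint manually, which is handy when experimenting with the SCN queries that follow; for example:

SQL> alter system checkpoint;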

The System Checkpoint SCN:

After a checkpoint completes, Oracle stores the system checkpoint SCN in the control file. You can access the checkpoint SCN using the following SQL:

SQL> select checkpoint_change# from v$database;


CHECKPOINT_CHANGE#
--------------------
292767

The Datafile Checkpoint SCN:

After a checkpoint completes, Oracle stores the SCN individually in the control file for each datafile. The following SQL shows the datafile checkpoint SCN for a single datafile in the control file:

SQL> select name,checkpoint_change# from v$datafile where name like '%users01%';

NAME CHECKPOINT_CHANGE#
----------------------------------- --------------------
/u02/oradata/OMFD1/users01.dbf 292767

The Start SCN:

Oracle stores the checkpoint SCN value in the header of each datafile. This is referred to as the start SCN because it is used at instance startup time to check if recovery is required. The following SQL shows the checkpoint SCN in the datafile header for a single datafile:

SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';

NAME CHECKPOINT_CHANGE#
----------------------------------- --------------------
/u02/oradata/OMFD1/users01.dbf 292767

The Stop SCN:

The stop SCN is held in the control file for each datafile. The following SQL shows the stop SCN for a single datafile when the database is open for normal use:

SQL> select name,last_change# from v$datafile where name like '%users01%';

NAME LAST_CHANGE#
----------------------------------- ------------
/u02/oradata/OMFD1/users01.dbf

During normal database operation, the stop SCN is NULL for all datafiles that are online in read-write mode.

SCN Values while the Database Is Up

Following a checkpoint while the database is up and open for use, the system checkpoint SCN in the control file, the datafile checkpoint SCN in the control file, and the start SCN in each datafile header all match. The stop SCN for each datafile in the control file is NULL.

SCN after a Clean Shutdown

After a clean database shutdown resulting from a SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL of the database, followed by STARTUP MOUNT, the previous queries on v$database and v$datafile return the following:

SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
--------------------
293184

SQL> select name,checkpoint_change#,last_change# from v$datafile where name like '%user%';

NAME CHECKPOINT_CHANGE# LAST_CHANGE#
----------------------------------- -------------------- --------------
/u02/oradata/OMFD1/users01.dbf 293184 293184


SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';

NAME CHECKPOINT_CHANGE#
----------------------------------- --------------------
/u02/oradata/OMFD1/users01.dbf 293184

During a clean shutdown, a checkpoint is performed and the stop SCN for each datafile is set to the start SCN from the datafile header. Upon startup, Oracle checks the start SCN in the file header with the datafile checkpoint SCN. If they match, Oracle checks the start SCN in the datafile header with the datafile stop SCN in the control file. If they match, the database can be opened because all block changes have been applied, no changes were lost on shutdown, and therefore no recovery is required on startup. After the database is opened, the datafile stop SCN in the control file once again changes to NULL to indicate that the datafile is open for normal use.
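
As a rough sketch of the startup check just described, the following query (an illustration only, not Oracle's actual internal mechanism) compares the controlfile and file-header SCNs when run against a mounted database, and returns any datafile whose SCNs do not line up:

SQL> select f.name
from v$datafile f, v$datafile_header h
where f.file# = h.file#
and (f.checkpoint_change# <> h.checkpoint_change#
or nvl(f.last_change#, -1) <> h.checkpoint_change#);

After a clean shutdown this returns no rows; after a crash or a restore of an older datafile, the affected files show up.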

SCN after an Instance Crash

The previous example showed the behavior of the SCN after a clean shutdown. To demonstrate the behavior of the checkpoints after an instance crash, the following SQL creates a table (which performs an implicit commit) and inserts a row of data into it without a commit:
SQL> create table x(x number) tablespace users;

SQL> insert into x values(100);

If the instance is crashed by using SHUTDOWN ABORT, the previous queries on v$database and v$datafile return the following after the database is started up in mount mode:

SQL> select checkpoint_change# from v$database;

CHECKPOINT_CHANGE#
--------------------
293185

SQL> select name,checkpoint_change#,last_change# from v$datafile where name like '%users01%';

NAME CHECKPOINT_CHANGE# LAST_CHANGE#
----------------------------------- -------------------- --------------
/u02/oradata/OMFD1/users01.dbf 293185


SQL> select name,checkpoint_change# from v$datafile_header where name like '%users01%';

NAME CHECKPOINT_CHANGE#
----------------------------------- --------------------
/u02/oradata/OMFD1/users01.dbf 293185

In this case, the stop SCN is not set, which is indicated by the NULL value in the LAST_CHANGE# column. This information enables Oracle, at the time of the next startup, to determine that the instance crashed because the checkpoint on shutdown was not performed. If it had been performed, the LAST_CHANGE# and CHECKPOINT_CHANGE# values would match for each datafile as they did during a clean shutdown. If an instance crashes at shutdown, then instance crash recovery is required the next time the instance starts up.

Recovery from an Instance Crash

Upon the next instance startup that takes place after SHUTDOWN ABORT or a DBMS crash, the Oracle DBMS detects that the stop SCN for datafiles is not set in the control file during startup. Oracle then performs crash recovery. During crash recovery, Oracle applies redo log records from the online redo logs in a process referred to as roll forward to ensure that all transactions committed before the crash are applied to the datafiles. Following roll forward, active transactions that did not commit are identified from the rollback segments and are undone before the blocks involved in the active transactions can be accessed. This process is referred to as roll back. In our example, the following transaction was active but not committed at the time of the SHUTDOWN ABORT, so it needs to be rolled back:

SQL> insert into x values(100);

After instance startup, the X table exists, but remains empty. Instance recovery happens automatically at database startup without database administrator (DBA) intervention. It may take a while because of the need to apply large amounts of outstanding redo changes to data blocks for transactions that completed and those that didn’t complete and require roll back.
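
You can confirm the result of the roll back once the database opens; for example:

SQL> select count(*) from x;

  COUNT(*)
----------
         0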

Recovery from a Media Failure

Up until this point, the checkpoint start SCN in the datafile header has always matched the datafile checkpoint SCN number held in the control file. This is reasonable because during a checkpoint, the datafile checkpoint SCN in the control file and the start SCN in the datafile header are both updated, along with the system checkpoint SCN. The following SQL shows the start SCN from the datafile header and datafile checkpoint SCN from the control file for the same file:

SQL> select 'controlfile' "SCN location",name,checkpoint_change# from v$datafile where name like '%users01%'
union
select 'file header',name,checkpoint_change# from v$datafile_header where name like '%users01%';

SCN location NAME CHECKPOINT_CHANGE#
-------------- ----------------------------------- --------------------
controlfile /u02/oradata/OMFD1/users01.dbf 293188
file header /u02/oradata/OMFD1/users01.dbf 293188

Unlike the v$datafile view, there is no stop SCN column in the v$datafile_header view because v$datafile_header is not used at instance startup time to indicate that an instance crash occurred. However, the v$datafile_header view does provide the Oracle DBMS with the information it requires to perform media recovery. At instance startup, the datafile checkpoint SCN in the control file and the start SCN in the datafile header are checked for equality. If they don’t match, it is a signal that media recovery is required.

For example, media recovery is required if a media failure has occurred and the original datafile has been replaced with a backup copy. In this case, the start SCN in the backup copy is less than the checkpoint SCN value in the control file, and Oracle requests archived redo logs—generated at the time of previous log switches—in order to reapply the changes required to bring the datafile up to the current point in time.

In order to recover the database from a media failure, you must run the database in ARCHIVELOG mode to ensure that all database changes from the online redo logs are stored permanently in archived redo log files. In order to enable ARCHIVELOG mode, you must run the command ALTER DATABASE ARCHIVELOG when the database is in a mounted state.
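
The full sequence is short; a minimal sketch:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;

You can confirm the result by querying the LOG_MODE column of v$database.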

You can identify files that need recovery after you have replaced a datafile with an older version by starting the instance in mount mode and running the following SQL:

SQL> select file#,change# from v$recover_file;

FILE# CHANGE#
---------- ----------
4 313401

In this example, file 4 is the datafile in the USERS tablespace. By reexecuting the previous SQL to display the datafile checkpoint SCN in the control file and the start SCN in the datafile header, you can see that the start SCN is older due to the restore of the backup datafile that has taken place:

SQL> select 'controlfile' "SCN location",name,checkpoint_change#
from v$datafile where name like '%users01%'
union
select 'file header',name,checkpoint_change#
from v$datafile_header where name like '%users01%';

SCN location NAME CHECKPOINT_CHANGE#
-------------- ----------------------------------- --------------------
controlfile /u02/oradata/OMFD1/users01.dbf 313551
file header /u02/oradata/OMFD1/users01.dbf 313401
If you were to attempt to open the database, you would receive errors like the following:

ORA-01113: file 4 needs media recovery
ORA-01110: datafile 4: '/u02/oradata/OMFD1/users01.dbf'

You can recover the database by issuing RECOVER DATABASE from SQL*Plus while the database is in a mounted state. If the changes needed to recover the database to the point in time before the crash are in an archived redo log, then you will be prompted to accept the suggested name:

ORA-00279: change 313401 generated at 11/10/2001 18:50:23 needed for thread 1
ORA-00289: suggestion : /u02/oradata/OMFD1/arch/T0001S0000000072.ARC
ORA-00280: change 313401 for thread 1 is in sequence #72

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

If you respond to the prompt using AUTO, Oracle applies any archived redo logs it needs, followed by any necessary changes in the online redo logs, to bring the database right up to the last committed transaction before the media failure that caused the requirement for the restore.

So far, we’ve considered recovery scenarios where the goal is to recover the database to the most recent transaction. This is known as complete recovery. The RECOVER DATABASE command has several other options that enable you to recover from a backup to a point in time before the most recent transaction by rolling forward and then stopping the application of the redo log changes at a specified point. This is known as incomplete recovery. You can specify a time or an SCN as the recovery point. For example,

recover database until time '2001-11-10:18:52:00';
recover database until change 313459;

Before you perform incomplete recovery, it’s recommended that you restore a complete database backup first. After incomplete recovery, you must open the mounted database with ALTER DATABASE OPEN RESETLOGS. This creates a new incarnation of the database and clears the contents of the existing redo logs to make sure they can’t be applied.
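
Putting the steps together, an incomplete recovery session might look like the following sketch, reusing the SCN from the example above:

SQL> startup mount
SQL> recover database until change 313459;
SQL> alter database open resetlogs;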

Recovery from a Media Failure Using a Backup Control File

In the previous example, we had access to a current control file at the time of the media failure. This means that none of the start SCN values in the datafile headers exceeded the system checkpoint SCN number in the control file. To recap, the system checkpoint number is given by the following:

SQL> select checkpoint_change# from v$database;

You might be wondering why Oracle needs to maintain the last system checkpoint value in the control file as well as checkpoint SCNs in the control file for each datafile (as used in the previous example). There are two reasons for this. The first is that you might have read-only tablespaces in your database. In this case, the database checkpoint SCN increases, and the checkpoint SCN for the datafiles in the read-only tablespace remains frozen in the control file.

The following SQL report output shows a database with a read-write tablespace (USERS) and read-only tablespace (TEST). The start SCN in the file header and the checkpoint SCN in the control file for TEST are less than the system checkpoint value. Once a tablespace is read only, checkpoints have no effect on the files in it. The other read-write tablespace has checkpoint values that match the system checkpoint:

SCN location NAME CHECKPOINT_CHANGE#
-------------------- ---------------------------------- ----------------
controlfile SYSTEM checkpoint 355390
file header /u02/oradata/OD2/users01.dbf 355390
file in controlfile /u02/oradata/OD2/users01.dbf 355390
file header /u02/oradata/OD2/test01.dbf 355383
file in controlfile /u02/oradata/OD2/test01.dbf 355383
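
You can check which tablespaces are read only with a simple query; for example:

SQL> select tablespace_name, status from dba_tablespaces;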

The second reason for the maintenance of multiple checkpoint SCNs in the control file is that you might not have a current control file available at recovery time. In this case, you need to restore an earlier control file before you can perform a recovery. The system checkpoint in the control file may indicate an earlier change than the start SCN in the datafile headers.

The following SQL shows an example where the system checkpoint SCN and datafile checkpoint SCN indicate an earlier change than the start SCN in the datafile header:

SQL> select 'controlfile' "SCN location",'SYSTEM checkpoint' name,checkpoint_change#
from v$database
union
select 'file in controlfile',name,checkpoint_change#
from v$datafile where name like '%users01%'
union
select 'file header',name,checkpoint_change#
from v$datafile_header where name like '%users01%';

SCN location NAME CHECKPOINT_CHANGE#
------------------- ------------------------------ ------------------
controlfile SYSTEM checkpoint 333765
file header /u02/oradata/OD2/users01.dbf 355253
file in controlfile /u02/oradata/OD2/users01.dbf 333765

If you try to recover a database in the usual way in this situation, Oracle detects that the control file is older than some of the datafiles, as indicated by the checkpoint SCN values in the datafile headers, and reports the following message:

SQL> recover database
ORA-00283: recovery session canceled due to errors
ORA-01610: recovery using the BACKUP CONTROLFILE option must be done

If you want to proceed with recovery in this situation, you need to indicate to Oracle that a noncurrent control file—possibly containing mismatches in the SCN values identified by the previous error messages—is about to be specified for recovery by using the following command:
recover database using BACKUP CONTROLFILE;
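
Note that after a recovery performed with a backup controlfile, the database must be opened with RESETLOGS; a minimal sketch of the sequence:

SQL> recover database using backup controlfile;
SQL> alter database open resetlogs;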


Steps to create a database manually on Linux

This article shows you the steps to create a database manually on Linux.

Step 1:

First, create all the necessary directories. The following are my directories:

testdb1]$ ls
admin backup archive
admin]$ ls
adump bdump cdump udump
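
A minimal sketch of the directory setup, assuming the /d02/monish/testdb1 base path used in the init file below:

$ mkdir -p /d02/monish/testdb1/admin/{adump,bdump,cdump,udump}
$ mkdir -p /d02/monish/testdb1/backup /d02/monish/testdb1/archive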

Step 2:

Next, prepare the database creation script. The following is my script "testdb1.sql":

CREATE DATABASE "testdb1"
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/d02/monish/testdb1/redo1.log' SIZE 10M,
GROUP 2 '/d02/monish/testdb1/redo2.log' SIZE 10M,
GROUP 3 '/d02/monish/testdb1/redo3.log' SIZE 10M
DATAFILE
'/d02/monish/testdb1/system.dbf' size 100m,
'/d02/monish/testdb1/usr04.dbf' size 10m
sysaux datafile '/d02/monish/testdb1/sysaux.dbf' size 100m
undo tablespace undotbs
datafile '/d02/monish/testdb1/undo.dbf' size 50m
CHARACTER SET US7ASCII
;

Step 3:

Prepare the init file. Like this one [inittestdb1.ora]

*.audit_file_dest='/d02/monish/testdb1/admin/adump'
*.background_dump_dest='/d02/monish/testdb1/admin/bdump'
*.compatible='10.2.0.3.0'
*.control_files='/d02/monish/testdb1/control01.ctl',
'/d02/monish/testdb1/control02.ctl','/d02/monish/testdb1/control03.ctl'
*.core_dump_dest='/d02/monish/testdb1/admin/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='testdb1'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=testdb1XDB)'
*.job_queue_processes=10
*.log_archive_dest_1='LOCATION=/d02/monish/testdb1/archive'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=300
*.pga_aggregate_target=200278016
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=601882624
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS'
*.user_dump_dest='/d02/monish/testdb1/admin/udump'
*.db_recovery_file_dest='/d02/monish/testdb1/backup'
*.db_recovery_file_dest_size=2147483648

Step 4:

Now perform the following steps:

$ export ORACLE_SID=testdb1

$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 22 17:35:28 2008
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Connected to an idle instance.

SQL> startup pfile=/u01/app/oracle/product/10.2.0/db_1/dbs/inittestdb1.ora nomount

ORACLE instance started.
Total System Global Area 603979776 bytes
Fixed Size 1263176 bytes
Variable Size 167774648 bytes
Database Buffers 427819008 bytes
Redo Buffers 7122944 bytes

SQL> @testdb1.sql
Database created.

Step 5:

So your database is created. Now just run the catalog.sql and catproc.sql scripts.
You will find them in $ORACLE_HOME/rdbms/admin:

SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catalog.sql
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catproc.sql
SQL> select name from v$database;

NAME
---------
TESTDB1

Okay, now your database is ready to use.
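
Optionally, you can now create an spfile from the pfile so that future parameter changes made with ALTER SYSTEM persist across restarts:

SQL> create spfile from pfile='/u01/app/oracle/product/10.2.0/db_1/dbs/inittestdb1.ora';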

Transportable Tablespaces

Transportable tablespaces allow users to export the metadata of a tablespace instead of the data in the tablespace.
By doing so, a consistent backup of the tablespace concerned can be transferred to the desired location and plugged in straight away by importing the metadata instead of importing the whole data.
This feature is very useful for data warehousing applications that need data distributed to various places. Using this feature we can reduce the time required to export and import the data.

RESTRICTIONS:
1. Operating system and processor type must be the same at source and target database
2. Database block size must be the same for the source and target database
3. Character set as well as national character set of the source and the target must be the same
4. Users whose default tablespace is getting exported must exist in the target database before importing
5. Target database must not have tablespace of the same name
6. Oracle versions in source and target must be 8.1.x and above
7. Snapshot/replication, function-based indexes, scoped REFs, domain indexes, and 8.0-compatible advanced queues with multiple recipients are not supported
8. The tablespace set must be self-contained so that every object is complete; for example, all partitions of a partitioned table must be included

Setup for Implementation:
The required scripts for transportable tablespaces are run by catproc.sql itself. Check that DBMS_PLUGTS and DBMS_TTS are valid.
If not, run the following scripts as the SYS user: $ORACLE_HOME/rdbms/admin/catplug.sql, $ORACLE_HOME/rdbms/admin/dbmsplts.sql, and $ORACLE_HOME/rdbms/admin/prvtplts.plb.
If you are importing the tablespace as a user other than SYS, create a public synonym DBMS_PLUGTS for SYS.DBMS_PLUGTS; otherwise the import will fail with errors against that package.
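
A quick way to check that the packages are valid; for example:

SQL> select object_name, object_type, status
from dba_objects
where object_name in ('DBMS_PLUGTS','DBMS_TTS');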
EXPORT THE TABLESPACE

In this scenario we are transporting the TSTAB_01 and TSIND_01 tablespaces.

Step 1:
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TSTAB_01,TSIND_01', TRUE, TRUE);

The statement above determines whether TSTAB_01 and TSIND_01 are self-contained, with referential integrity constraints taken into consideration (indicated by the first TRUE). The following statement then shows whether there are any violations, such as a foreign key constraint (for example, dept_fk) that crosses the tablespace set, or a partitioned table that is not fully included:

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

Any violations reported here must be resolved before proceeding.

Step 2:
Generate a Transportable Tablespace Set

Any privileged user can perform this step. However, you must have been assigned the EXP_FULL_DATABASE role to perform a transportable tablespace export operation. After ensuring you have a self-contained set of tablespaces that you want to transport, generate a transportable tablespace set by making all tablespaces in the set read-only:

SQL> ALTER TABLESPACE TSTAB_01 READ ONLY;

Tablespace altered.

SQL> ALTER TABLESPACE TSIND_01 READ ONLY;

Tablespace altered.

Step 3:
Create an export parameter file (exptrans.par) to export the tablespaces:

userid=sys@DBNAME1
file=/oradata4/transtab.dmp
log=/oradata4/transtab.log
TRANSPORT_TABLESPACE=y
tablespaces=TSTAB_01,TSIND_01
rows=y
CONSTRAINTS=Y
GRANTS=Y
TRIGGERS=Y
statistics=none
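
The export is then run with the original exp utility pointing at the parameter file (depending on your version, you may need to connect AS SYSDBA for a transportable tablespace export):

$ exp parfile=exptrans.par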

Note: After completion of the export, don't change the status of the tablespaces. Only after completion of the import should the tablespaces be made read-write again; otherwise you will need to take a fresh export of the tablespaces.

Step 4:
Transfer the metadata export dump file and the copy of the datafile(s) to the target database. (This could be via ftp, copy, CD-ROM, or any other compatible medium.)

Step 5:
Import the metadata using the TABLESPACES, TRANSPORT_TABLESPACE, and DATAFILES clauses. If more than one datafile is involved, specify all of them, comma separated:

userid=sys@DBNAME2
file=/oradata4/transtab.dmp
log=/oradata4/transtab.imp
TRANSPORT_TABLESPACE=y
tablespaces=TSTAB_01,TSIND_01
datafiles=/oradata1/*.dbf
rows=y
CONSTRAINTS=Y
GRANTS=Y
TRIGGERS=Y
Ignore=y
statistics=none

(In the datafiles parameter, mention all the datafiles related to the tablespaces being transported.)
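
Assuming the parameters above are saved as imptrans.par (a hypothetical file name), the import is run the same way:

$ imp parfile=imptrans.par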

Step 6:
Make the tablespaces read-write at the target database:

SQL> alter tablespace TSTAB_01 read write;
SQL> alter tablespace TSIND_01 read write;

USEFUL VIEWS

1. DBA_TABLESPACES and V$DATAFILE: These contain a PLUGGED_IN field, which is helpful in identifying plugged-in tablespaces. The value of PLUGGED_IN is YES in dba_tablespaces and 1 in v$datafile if the tablespace/datafile was plugged in.

SQL> select tablespace_name, status, plugged_in from dba_tablespaces;

SQL> select name, status, plugged_in from v$datafile;

2. PLUGGABLE_SET_CHECK: This view is used by the export utility to check that the tablespace set to be exported is complete and self-contained.

Script to calculate free, used and total space and display it in MB

rem :
rem Filename: freespace.sql
rem Purpose: Display used, free and total space per tablespace in MB
rem Author: Muhammad
rem Date Submitted:21-Jul-2007
rem Date Published:23-Jul-2007
rem


-- Assumes a database block size of 8K: blocks * 8 * 1024 = bytes.
SELECT s.tablespace_name TABLESPACE,
ROUND (((s.ublocks * 8) * 1024) / (1024 * 1024), 0) "USEDMB",
ROUND ((((f.fblocks * 8) * 1024) / (1024 * 1024)), 0) "FREEMB",
ublocks + fblocks "TOTAL BLOCKS",
ROUND (((((ublocks + fblocks) * 8) * 1024) / (1024 * 1024)),
0
) "TOTAL MB"
FROM (SELECT tablespace_name, SUM (blocks) ublocks
FROM dba_segments
GROUP BY tablespace_name) s,
(SELECT tablespace_name, SUM (blocks) fblocks
FROM dba_free_space
GROUP BY tablespace_name) f
WHERE s.tablespace_name = f.tablespace_name
ORDER BY TABLESPACE;

Script to see the Reserved, Used and Free space of your database

REM: Query to see the Reserved, Used and Free space of your database
REM:
REM:*****************************************
REM: Author: Seju
REM: Date Submitted: 3-Nov-2007
REM:
REM:
REM: NOTE: PLEASE TEST THIS SCRIPT BEFORE USE.
REM: Author will not be responsible for any damage that may be caused by this script.
REM:*****************************************

SELECT r.tablespace_name, reserved_space "RESERVED_SPACE(MB)",
reserved_space - free_space "USED_SPACE(MB)",
free_space "FREE_SPACE(MB)"
FROM (SELECT tablespace_name, SUM (BYTES)
/ (1024 * 1024) reserved_space
FROM dba_data_files
GROUP BY tablespace_name) r,
(SELECT tablespace_name, SUM (BYTES) / (1024 * 1024) free_space
FROM dba_free_space
GROUP BY tablespace_name) f
WHERE r.tablespace_name = f.tablespace_name
ORDER BY r.tablespace_name;

Oracle Data Pump in Oracle Database 10g


Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.

Getting Started

For the examples to work we must first unlock the SCOTT account and create a directory object it can access:
CONN sys/password@db10g AS SYSDBA
ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK;
GRANT CREATE ANY DIRECTORY TO scott;

CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;

Table Exports/Imports

The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log

The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
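
For example, to append the exported rows to tables that already exist in the target database:

impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp table_exists_action=APPEND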

Schema Exports/Imports

The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log

Database Exports/Imports

The FULL parameter indicates that a complete database export is required. The following is an example of the full database export and import syntax:
expdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=expdpDB10G.log

impdp system/password@db10g full=Y directory=TEST_DIR dumpfile=DB10G.dmp logfile=impdpDB10G.log

Miscellaneous Information

Unlike the original exp and imp utilities all data pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.

All data pump actions are performed by multiple jobs (server processes, not DBMS_JOB jobs). These jobs are controlled by a master control process which uses Advanced Queuing. At runtime an advanced queue table, named after the job name, is created and used by the master control process. The table is dropped on completion of the data pump job. The job and the advanced queue can be named using the JOB_NAME parameter.

Cancelling the client process does not stop the associated data pump job. Issuing "ctrl+c" on the client during a job stops the client output and presents a command prompt. Typing "status" at this prompt allows you to monitor the current job:
Export> status

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\TEMP\DB10G.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING
Object Schema: SYSMAN
Object Name: MGMT_CONTAINER_CRED_ARRAY
Object Type: DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Completed Objects: 261
Total Objects: 261
Data pump performance can be improved by using the PARALLEL parameter. This should be used in conjunction with the "%U" wildcard in the DUMPFILE parameter to allow multiple dumpfiles to be created or read:
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs:
system@db10g> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME OPERATION
------------------------------ ------------------------------ ------------------------------
JOB_MODE STATE DEGREE ATTACHED_SESSIONS
------------------------------ ------------------------------ ---------- -----------------
SYSTEM SYS_EXPORT_FULL_01 EXPORT
FULL EXECUTING 1 1
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used all objects except those specified by it will be included in the export:
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log

Data Pump API

Along with the data pump utilities Oracle provides a PL/SQL API. The following is an example of how this API can be used to perform a schema export:
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
  l_dp_handle       NUMBER;
  l_last_job_state  VARCHAR2(30) := 'UNDEFINED';
  l_job_state       VARCHAR2(30) := 'UNDEFINED';
  l_sts             KU$_STATUS;
BEGIN
  -- Open a schema-mode export job.
  l_dp_handle := DBMS_DATAPUMP.open(
    operation   => 'EXPORT',
    job_mode    => 'SCHEMA',
    remote_link => NULL,
    job_name    => 'EMP_EXPORT',
    version     => 'LATEST');

  -- Define the dump file.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'SCOTT.dmp',
    directory => 'TEST_DIR');

  -- Define the log file.
  DBMS_DATAPUMP.add_file(
    handle    => l_dp_handle,
    filename  => 'SCOTT.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Limit the export to the SCOTT schema.
  DBMS_DATAPUMP.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''SCOTT''');

  -- Start the job and detach; it continues running on the server.
  DBMS_DATAPUMP.start_job(l_dp_handle);

  DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
Once the job has started the status can be checked using:
system@db10g> select * from dba_datapump_jobs;
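
You can also re-attach to the running job from the command line using its job name, then use the interactive commands shown later in the help output:

expdp scott/tiger attach=EMP_EXPORT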

External Tables

Oracle has incorporated support for data pump technology into external tables. The ORACLE_DATAPUMP access driver can be used to unload data to data pump export files and subsequently reload it. The unload of data occurs when the external table is created using the "AS" clause:
CREATE TABLE emp_xt
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('emp_xt.dmp')
)
AS SELECT * FROM emp;
The data can then be queried using:
SELECT * FROM emp_xt;
The syntax to create the external table pointing to an existing file is similar, but without the "AS" clause:
DROP TABLE emp_xt;

CREATE TABLE emp_xt (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2))
ORGANIZATION EXTERNAL (
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY test_dir
LOCATION ('emp_xt.dmp')
);

SELECT * FROM emp_xt;

Help

The HELP=Y option displays the available parameters:
expdp help=y

Export: Release 10.1.0.2.0 - Production on Tuesday, 23 March, 2004 8:33

Copyright (c) 2003, Oracle. All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
TABLES Identifies a list of tables to export - one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description
------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
ADD_FILE=dumpfile-name
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.



impdp help=y

Import: Release 10.1.0.2.0 - Production on Saturday, 11 September, 2004 17:22

Copyright (c) 2003, Oracle. All rights reserved.


The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

Keyword Description (Default)
------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
CONTENT Specifies data to load where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dump, log, and sql files.
DUMPFILE List of dumpfiles to import from (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE Calculate job estimates where the valid keywords are:
(BLOCKS) and STATISTICS.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FLASHBACK_SCN SCN used to set session snapshot back to.
FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Import everything from source (Y).
HELP Display help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of import job to create.
LOGFILE Log file name (import.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile.
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to import a subset of a table.
REMAP_DATAFILE Redefine datafile references in all DDL statements.
REMAP_SCHEMA Objects from one schema are loaded into another schema.
REMAP_TABLESPACE Tablespace object are remapped to another tablespace.
REUSE_DATAFILES Tablespace will be initialized if it already exists (N).
SCHEMAS List of schemas to import.
SKIP_UNUSABLE_INDEXES Skip indexes that were set to the Index Unusable state.
SQLFILE Write all the SQL DDL to a specified file.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STREAMS_CONFIGURATION Enable the loading of Streams metadata
TABLE_EXISTS_ACTION Action to take if imported object already exists.
Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE.
TABLES Identifies a list of tables to import.
TABLESPACES Identifies a list of tablespaces to import.
TRANSFORM Metadata transform to apply (Y/N) to specific objects.
Valid transform keywords: SEGMENT_ATTRIBUTES and STORAGE.
ex. TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE.
TRANSPORT_DATAFILES List of datafiles to be imported by transportable mode.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be loaded.
Only valid in NETWORK_LINK mode import operations.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
Only valid for NETWORK_LINK and SQLFILE.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command Description (Default)
------------------------------------------------------------------------------
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
START_JOB Start/resume current job.
START_JOB=SKIP_CURRENT will start the job after skipping
any action which was in progress when job was stopped.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS=[interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.

Tablespace Management Enhancements in Oracle Database 10g


Oracle 10g includes several small but neat enhancements related to tablespace management, including:

Default Permanent Tablespace

Oracle9i introduced the concept of a default temporary tablespace to prevent people accidentally using the SYSTEM tablespace for temporary segments. Oracle 10g takes this further by including a default permanent tablespace to prevent users having their default tablespace set to SYSTEM. The DEFAULT TABLESPACE clause in the CREATE DATABASE statement allows the default tablespace to be created and named. If this parameter is not set during creation, or needs to be changed subsequently, it can be set using the following command.
ALTER DATABASE DEFAULT TABLESPACE users;
The current settings for the default tablespaces can be viewed using the following query.
COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A30
COLUMN description FORMAT A50
SET LINESIZE 200

SELECT *
FROM database_properties
WHERE property_name like '%TABLESPACE';

PROPERTY_NAME PROPERTY_VALUE DESCRIPTION
------------------------------ ------------------------------ --------------------------------------
DEFAULT_TEMP_TABLESPACE TEMP Name of default temporary tablespace
DEFAULT_PERMANENT_TABLESPACE USERS Name of default permanent tablespace

Rename Tablespace

Renaming permanent and temporary tablespaces is now allowed in Oracle 10g (except for SYSTEM and SYSAUX) using the following command.
ALTER TABLESPACE ts_current_name RENAME TO ts_new_name;
The tablespace and all its datafiles must be online and the database must have a COMPATIBLE setting of 10.0.0 or greater.

If the tablespace is read-only, the datafile headers are not altered to reflect the name change and a message is written to the alert log to notify you of this fact. The impact on recovery is that the tablespace will be recovered to its old name if the controlfile is recreated and datafiles containing the old headers are used.

If an undo tablespace is renamed in an instance which uses a pfile, rather than an spfile, a message is written to the alert log reminding you to change the value of the UNDO_TABLESPACE parameter.

SYSAUX Tablespace

The SYSAUX tablespace provides a single location for all non-essential database metadata. In the past the schema objects to support many database features were located in the SYSTEM tablespace. These have now been moved to the SYSAUX tablespace. As a result the SYSTEM tablespace is less cluttered and suffers less contention. In addition, the total number of tablespaces to support database features has been reduced.

The registered occupants of the SYSAUX tablespace are listed in the V$SYSAUX_OCCUPANTS view. The view includes a MOVE_PROCEDURE column which specifies the procedure name which can be used to move the components for that occupant to another tablespace. This is useful if the schema associated with one occupant grows to the point where it would benefit from its own tablespace.

The components which take up the largest amount of space in the SYSAUX tablespace are typically the Automatic Workload Repository (AWR) and the Enterprise Manager (EM) repository. Other components, such as Oracle UltraSearch, Oracle Text and Oracle Streams, will have no significant impact on the total tablespace size unless they are used heavily.
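
You can see the registered occupants, their sizes and their move procedures with a query like the following:

SELECT occupant_name, schema_name, move_procedure, space_usage_kbytes
FROM v$sysaux_occupants
ORDER BY space_usage_kbytes DESC;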

Multiple Temporary Tablespaces

Tablespace groups allow users to use more than one tablespace to store temporary segments. The tablespace group is created implicitly when the first tablespace is assigned to it:
-- Create group by adding existing tablespace.
ALTER TABLESPACE temp TABLESPACE GROUP temp_ts_group;

-- Add a new tablespace to the group.
CREATE TEMPORARY TABLESPACE temp2
TEMPFILE '/u01/app/oracle/oradata/DB10G/temp201.dbf' SIZE 20M
TABLESPACE GROUP temp_ts_group;
The tablespaces assigned to a group can be viewed using:
SELECT * FROM dba_tablespace_groups;

GROUP_NAME TABLESPACE_NAME
------------------------------ ------------------------------
TEMP_TS_GROUP TEMP
TEMP_TS_GROUP TEMP2

2 rows selected.
Once the group is created it can be assigned just like a tablespace to a user or as the default temporary tablespace:
-- Assign group as the temporary tablespace for a user. 
ALTER USER scott TEMPORARY TABLESPACE temp_ts_group;

-- Assign group as the default temporary tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts_group;
A tablespace can be removed from a group using:
ALTER TABLESPACE temp2 TABLESPACE GROUP '';

SELECT * FROM dba_tablespace_groups;

GROUP_NAME TABLESPACE_NAME
------------------------------ ------------------------------
TEMP_TS_GROUP TEMP

1 row selected.
There is no theoretical maximum limit to the number of tablespaces in a tablespace group, but it must contain at least one. The group is implicitly dropped when the last member is removed. The last member of a group cannot be removed if the group is still assigned as the default temporary tablespace. In this example the following must be done to remove the last member from the group.
-- Switch from the group to a specific tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

-- Remove the tablespace from the group.
ALTER TABLESPACE temp TABLESPACE GROUP '';

-- Check the group has gone.
SELECT * FROM dba_tablespace_groups;

no rows selected
Tablespace groups share the same namespace as tablespaces, so a group and a tablespace cannot share the same name.