EXPDP
Export data with Oracle Data Pump.
Syntax:
      EXPDP username/password@connect_string ATTACH [=[schema_name.]job_name]

      EXPDP username/password@connect_string FULL=Y options

      EXPDP username/password@connect_string SCHEMAS=schema_name [,schema_name2...] options

      EXPDP username/password@connect_string TABLES=[schema_name.]table_name[:partition_name] [,table_name2...] options

      EXPDP username/password@connect_string TABLESPACES=tablespace_name [,tablespace_name2...] options

      EXPDP username/password@connect_string TRANSPORT_TABLESPACES=tablespace_name [,tablespace_name2...]
         [TRANSPORT_FULL_CHECK={Y|N}] options

Options:
   CLUSTER={Y|N}
   COMPRESSION={METADATA_ONLY | NONE}
   CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
   DATA_OPTIONS=XML_CLOBS
   DIRECTORY=directory_object
   DUMPFILE=[directory_object:]file_name [,[directory_object:]file_name...]
   ESTIMATE={BLOCKS | STATISTICS}
   ESTIMATE_ONLY={Y|N}
   EXCLUDE=object_type[:name_clause] [,EXCLUDE=...]
   ENCRYPTION={ALL | DATA_ONLY | METADATA_ONLY}
   ENCRYPTION_PASSWORD=password
   ENCRYPTION_ALGORITHM={AES128 | AES192 | AES256}
   ENCRYPTION_MODE={DUAL | PASSWORD | TRANSPARENT}
   FILESIZE=number_of_bytes
   FLASHBACK_SCN=scn_value
   FLASHBACK_TIME=timestamp
   INCLUDE=object_type[:name_clause] [,INCLUDE=...]
   JOB_NAME=jobname_string
   LOGFILE=[directory_object:]file_name
   NOLOGFILE={Y|N}
   PARFILE=[directory_path]file_name
   NETWORK_LINK=database_link
   PARALLEL=int
   REMAP_DATA
   QUERY=[schema_name.][table_name:]query_clause
   REUSE_DUMPFILES={Y|N}   Overwrite destination dump file if it exists (default N)
   SAMPLE=[schema_name.][table_name:]sample_percent
   SERVICE_NAME
   SOURCE_EDITION
   STATUS[=int]
   TRANSPORTABLE={ALWAYS | NEVER}
   VERSION={COMPATIBLE | LATEST | version_string}

Interactive mode options:
   ADD_FILE=dumpfile_name   Add dumpfile to dumpfile set.
   CONTINUE_CLIENT          Return to logging mode. Job will be restarted if idle.
   EXIT_CLIENT              Quit client session and leave job running.
   FILESIZE                 Default filesize (bytes) for subsequent ADD_FILE commands.
   HELP                     Summarize interactive commands.
   KILL_JOB                 Detach from and delete the job.
   PARALLEL=int             Change the number of active workers for the current job.
   REUSE_DUMPFILES          Overwrite destination dump file if it exists (default N).
   START_JOB                Start/resume the current job.
   STATUS[=interval]        Frequency (secs) at which job status is monitored; the default (0) shows new status when available.
   STOP_JOB[=IMMEDIATE]     Orderly shutdown of job execution and exit the client; STOP_JOB=IMMEDIATE performs an immediate shutdown of the Data Pump job.
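The interactive commands above are issued at the Export> prompt, reached by pressing Ctrl+C during a running export or by attaching to an existing job. A sketch of a typical session (the job name DEMO_JOB and the connect details are hypothetical placeholders, not part of this reference):

```shell
# Attach to a running Data Pump export job (DEMO_JOB is an assumed example name)
expdp system/password@sid ATTACH=DEMO_JOB

# Commands then typed at the Export> prompt:
#   Export> STATUS=60       -- report job status every 60 seconds
#   Export> PARALLEL=4      -- change the number of active workers to 4
#   Export> STOP_JOB        -- orderly shutdown; resume later with START_JOB
#   Export> EXIT_CLIENT     -- detach from the job, leaving it running on the server
```

Because the job runs on the server, EXIT_CLIENT is safe: the export continues and you can re-attach at any time with ATTACH.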
expdp is a server-side utility used to unload database data into a set of OS files called a 'dump file set'. The dump file set can be imported only by the Data Pump Import utility, impdp. The dump file set can be imported on the same system, or it can be moved to another system and loaded there.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information.
The directory objects must be created before running expdp.
Example
CREATE OR REPLACE DIRECTORY export_dir AS '/u01/app/oracle/oradata/';
GRANT READ, WRITE ON DIRECTORY export_dir TO demouser;

expdp system/password@sid full=Y directory=export_dir dumpfile=ss64.dmp logfile=exp_ss64.log
To export only a selection of schemas or tables, replace full=Y in the example above with schemas=USER1,USER2 or tables=TABLE1,TABLE2 as appropriate.
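For instance, reusing the export_dir directory object created above (the schema and table names here are placeholders):

```shell
# Export two schemas into one dump file
expdp system/password@sid schemas=USER1,USER2 directory=export_dir dumpfile=schemas.dmp logfile=exp_schemas.log

# Export two tables from a specific schema
expdp system/password@sid tables=USER1.TABLE1,USER1.TABLE2 directory=export_dir dumpfile=tables.dmp logfile=exp_tables.log
```

The invoking database user needs READ and WRITE on the directory object, as granted in the example above.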