Oracle schema- and tablespace-level data migration

Recently I have been supporting Oracle DB work for two project teams. Since the projects are still in development, there is a steady stream of database copy requests. After comparing several approaches, EXPDP/IMPDP turned out to work well and meets the developers' needs.

    Requirement: user/schema-level data migration.
    Version: Oracle Enterprise 11g R2

   Overall, the process breaks down into the following steps:

 1. Check the dump directories on the source and target databases.

  SQL> select * from dba_directories;
  SYS        SUBDIR                /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/order_entry/2002/Sep
  SYS        SS_OE_XMLDIR          /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/order_entry/
  SYS        LOG_FILE_DIR          /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/log/
  SYS        DATA_FILE_DIR         /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/sales_history/
  SYS        XMLDIR                /ade/b/2125410156/oracle/rdbms/xml
  SYS        MEDIA_DIR             /opt/oracle/app/oracle/product/11.2.0/dbhome_1/demo/schema/product_media/
  SYS        DATA_PUMP_DIR         /opt/oracle/app/oracle/admin/oracle/dpdump/
  SYS        DATA_DUMP_DIR         /home/oracle/dpdump/
  SYS        ORACLE_OCM_CONFIG_DIR /opt/oracle/app/oracle/product/11.2.0/dbhome_1/ccr/state
  9 rows selected.

DATA_DUMP_DIR above is a dump directory that can be used right away; export and import files are placed under its path. If no such directory exists, create one manually and grant read/write privileges on it.

  SQL> CONN sys /as sysdba
  SQL> CREATE DIRECTORY DATA_DUMP_DIR AS '/home/oracle/dpdump/';
  SQL> GRANT READ,WRITE ON DIRECTORY DATA_DUMP_DIR TO TEST;
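
Note that CREATE DIRECTORY only registers the path inside the database; the path itself has to exist on the database server's filesystem and be writable by the oracle OS user. A minimal sketch, run as root on the server (the oinstall group is an assumption about the installation):

  mkdir -p /home/oracle/dpdump
  chown oracle:oinstall /home/oracle/dpdump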

2. Compare usernames and tablespaces between the source and target databases

If the username and tablespace to be imported do not yet exist in the target database, create them in the target database first.

Source database:
  SQL> select username,default_tablespace from dba_users; -- check users and their default tablespaces in the source database
  SQL> create tablespace test_tbs datafile '/+data1/orcl/TEST_TBS01.DBF' size 1024m autoextend on next 1024M maxsize 10240M;
  SQL> create user test identified by test default tablespace test_tbs;
  SQL> grant resource to test;
  SQL> create table test.test1 as select * from dba_data_files;
  SQL> create table test.test2 as select * from user_tables;
  SQL> create table test.test3 as select * from user_sys_privs;
  SQL> create table test.test4 as select * from dba_tables;

Target database:
 

  SQL> create tablespace test1_tbs datafile '/+data1/orcl/TEST1_TBS01.DBF' size 1024m autoextend on next 1024M maxsize 10240M;
  SQL> create user test identified by test default tablespace test1_tbs;
  SQL> create user test1 identified by test default tablespace test1_tbs;
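
Before importing, it is worth a quick check on the target that the users and tablespaces are actually in place, and that the target users can allocate space there, mirroring the grant resource to test done on the source side. A minimal sketch using the names above (note that the non-remapped import in step 4 also expects a TEST_TBS tablespace on the target):

  SQL> select username, default_tablespace from dba_users where username in ('TEST','TEST1');
  SQL> select tablespace_name, status from dba_tablespaces where tablespace_name in ('TEST_TBS','TEST1_TBS');
  SQL> grant resource to test;
  SQL> grant resource to test1;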

 

 


 3. Export scripts

  -- export a specific schema (note: PARALLEL=4 against a single dump file can trigger ORA-39095; see the appendix at the end)
  expdp system/oracle@orcl DIRECTORY=DATA_DUMP_DIR DUMPFILE=test_20140324.DMP SCHEMAS=test logfile=test_expdp_20111014.log status=10 parallel=4 CONTENT=ALL COMPRESSION=ALL
  -- consistent export pinned to a specific SCN via flashback_scn (see the note below on obtaining the SCN); for the entire database, use FULL=Y instead of SCHEMAS=test
  expdp system/oracle@orcl DIRECTORY=DATA_DUMP_DIR DUMPFILE=test_20140324.DMP SCHEMAS=test logfile=test_expdp_20111014.log status=10 parallel=1 CONTENT=ALL flashback_scn=18341888 COMPRESSION=ALL
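
The SCN passed to flashback_scn should be taken from the source database at the point in time the export is meant to be consistent with. A minimal sketch of how you might obtain it on the source (the timestamp below is only an illustration):

  SQL> select current_scn from v$database;
  -- or convert a wall-clock time into an SCN
  SQL> select timestamp_to_scn(to_timestamp('2014-03-24 09:00:00','YYYY-MM-DD HH24:MI:SS')) as scn from dual;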
 

4. Import scripts

  -- import from one schema (test) into another schema (test1); the dump file must already sit in the target side's dump directory, see the note after these commands
  impdp system/oracle@TEST DIRECTORY=DATA_DUMP_DIR DUMPFILE=test_20140324.DMP LOGFILE=impdp_test_20140324_2.log REMAP_SCHEMA=test:test1 REMAP_TABLESPACE=TEST_TBS:TEST1_TBS CONTENT=ALL PARALLEL=4 STATUS=10 TABLE_EXISTS_ACTION=REPLACE
  -- import without renaming schema or tablespace
  impdp system/oracle@TEST DIRECTORY=DATA_DUMP_DIR DUMPFILE=test_20140324.DMP LOGFILE=impdp_test_20111014_1.log SCHEMAS=test CONTENT=ALL PARALLEL=4 STATUS=10 TABLE_EXISTS_ACTION=REPLACE
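
If the source and target databases run on different servers, the dump file written by expdp has to be copied into the path behind the target's DATA_DUMP_DIR before impdp can read it. A minimal sketch (target-host is a placeholder for the target server):

  scp /home/oracle/dpdump/test_20140324.DMP oracle@target-host:/home/oracle/dpdump/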
     

Appendix: check how much space each schema occupies:

Source database:

SQL> select owner,round(sum(bytes/1024/1024),2)||' MB' as schema_size from dba_segments where owner='TEST' group by owner order by SCHEMA_SIZE desc;

OWNER          SCHEMA_SIZE
------------------------------ -------------------------------------------
TEST          9.5 MB

Target database:


SQL> select owner,round(sum(bytes/1024/1024),2)||' MB' as schema_size from dba_segments where owner='TEST1' group by owner order by SCHEMA_SIZE desc;

OWNER          SCHEMA_SIZE
------------------------------ -------------------------------------------
TEST1          9.5 MB

 
-- check the number of objects that have been imported
Source database:
SQL> select owner,count(*) from dba_objects where owner='TEST' group by owner order by owner;


OWNER     COUNT(*)
------------------------------ ----------
TEST     5

Target database:

SQL> select owner,count(*) from dba_objects where owner='TEST1' group by owner order by owner;

OWNER     COUNT(*)
------------------------------ ----------
TEST1     5
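
Beyond the object counts, you can also spot-check row counts of individual tables on both sides, using the sample tables created above:

  -- on the source database
  SQL> select count(*) from test.test1;
  -- on the target database
  SQL> select count(*) from test1.test1;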

 ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" stopped due to fatal error at 13:34:40

An inappropriate PARALLEL setting during expdp can also trigger ORA-39095

When running an expdp export you may run into ORA-39095. There are two possible causes; let's take them one at a time.
First, the official explanation:
ORA-39095: Dump file space has been exhausted: Unable to allocate string bytes
Cause: The Export job ran out of dump file space before the job was completed.
Action: Reattach to the job and add additional dump files to the job restarting the job.
The wording already explains the first cause: the job simply ran out of dump file space. The fix is just as simple: give it more space.
There is also a second cause: PARALLEL is used, but there is only one dump file, or fewer dump files than the parallel degree. Here is the official documentation:

Oracle® Database Utilities
10g Release 2 (10.2)
2 Data Pump Export
PARALLEL=integer
The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify substitution variables in the dump file specifications).
Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. Some of the worker processes will be idle while waiting for files, thereby degrading the overall performance of the job. (The first consequence: degraded performance.)
More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation will be stopped with an ORA-39095 error. (This is what causes ORA-39095.) Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE command while in interactive mode, and in the case of a stopped job, restarting the job.
In plain terms: the parallel I/O server processes cannot all write to the same file at once, so having only one dump file (or fewer dump files than the parallel degree) hurts performance. Worse, once an I/O server process is left waiting because it cannot get a dump file to write to, ORA-39095 is raised.
To resolve it, attach to the job with expdp ATTACH and then either lower the parallel degree or add more dump files, for example:
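A rough sketch of that interactive fix, assuming the job name SYS_EXPORT_SCHEMA_01 from the error message above (the extra dump file names are only placeholders); either adjustment on its own is usually enough:

  expdp system/oracle ATTACH=SYS_EXPORT_SCHEMA_01
  Export> ADD_FILE=DATA_DUMP_DIR:test_20140324_02.DMP,DATA_DUMP_DIR:test_20140324_03.DMP
  Export> PARALLEL=2
  Export> START_JOB
  Export> CONTINUE_CLIENT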
To prevent it at the source, specify at least as many dump files as the parallel degree whenever you use PARALLEL, or use a command like the one below (note the %u substitution variable in DUMPFILE):
expdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4

Reposted from the ITPUB blog: http://blog.itpub.net/28793776/viewspace-1472539/
