About a Migration Using Oracle Data Pump


Project Target:

  Source
     Server: local-ip: 10.10.10.11, 10.10.10.12
             virt-ip: 10.10.10.13, 10.10.10.14
             scan-ip: 10.10.10.15
         DB: db-rac01, db-rac02 running Oracle GI and DB 11.2.0.4 on AIX 6.1L
    Schemas: 'FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC'
  Target
     Server: local-ip: 10.10.10.21, 10.10.10.22
             virt-ip: 10.10.10.23, 10.10.10.24
             scan-ip: 10.10.10.25
         DB: db-rac03, db-rac04 running Oracle GI and DB 11.2.0.4 on AIX 6.1L
    Schemas: 'FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC'

Planning:
      Of course, you should make a plan for the migration.
  You should take care of the points below:
  a. A DB directory, and a user with enough privileges to export the schemas.
         If there is none, create one, or grant the DBA role to an existing user.
  b. The database NLS settings (NLS_LANG) should be the same on both sides.
  c. The database edition, and the manuals of Data Pump (expdp, impdp).
  d. Check the data size and the data types of the schemas.
         Is there any static data that can be migrated while the applications are still using the DB?
         If yes, migrate it first.
         Is there enough time and network bandwidth to move the dump files to the target server?
         If not, you may need to consider building the indexes manually on the target DB instead of exporting them.
         Are there any objects that are "INVALID" in the source DB?
         If yes, make a note of them.
  e. Try the export and the import before you really need to do them.
         You might face a lot of problems now; resolve them.
         This time you will also be able to estimate how long the migration needs to finish.
         In the course of this test you may find options that can be optimized, for example "PARALLEL".
  f. Prepare a database link from the source DB to the target DB.
         You had better create the db link in the source DB, because the listener of the source DB
             may be disabled during the migration.
         Compare the objects in the source DB and the target DB after the migration.
  g. Plan the cleanup work for after the migration finishes.
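The checks in point d. can be rolled into one pass before the migration window. The sketch below only generates a sqlplus script; the file name /tmp/precheck.sql and the MB rounding are my own choices, and the schema list matches this migration.

```shell
#!/bin/sh
# Generate a pre-check script for point d. above: data volume per schema,
# plus objects that are already INVALID before the migration starts.
# Run the generated file in the Source DB with any DBA account.

SCHEMAS="'FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC'"

cat > /tmp/precheck.sql << EOF
set pagesize 200 linesize 160
-- data volume per schema
select owner, round(sum(bytes)/1024/1024) as "DataSize (MB)"
    from dba_segments where owner in ($SCHEMAS)
    group by owner order by 2 desc;
-- objects that are INVALID before the migration (make a note of these)
select owner, object_name, object_type
    from dba_objects
    where owner in ($SCHEMAS) and status = 'INVALID';
EOF

# then, on the source server:
#   sqlplus jackson/jackson_mig @/tmp/precheck.sql
```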

Executing:
  Create a DB user and directories for the migration:
  SQL> create user jackson identified by "jackson_mig" account unlock;   -- execute in both
  SQL> grant dba to jackson;                                             -- execute in both
  SQL> grant create session to jackson;                                  -- execute in both
  SQL> create directory EXPDUMP as '/dmpfiles';                          -- execute in Source DB
  SQL> create directory IMPDUMP as '/dmpfiles';                          -- execute in Target DB
  Share the dump files on the source server with the target server (with NFS):
    @ Mount the fs on the target server as root
  # mount -o rw,bg,hard,intr,proto=tcp,vers=3,rsize=65536,wsize=65536,timeo=600 10.10.10.11:/dmpfiles /dmpfiles
    @ If you do not know how to use NFS, just ask your SA, or ask TOM.
    @ You can also use scp to copy the dump files to the target server, but it takes more time.
  $ scp /dmpfiles/expdp_*.dmp oracle@10.10.10.21:/dmpfiles/
    @ Or just use tar (you can find it on any OS, well, except Windows)
  $ cd /dmpfiles; tar -cf - expdp_*.dmp | ssh 10.10.10.21 "tar -xvf - -C /dmpfiles"    -- for Unix and Linux
  $               tar -c expdp_*.dmp | ssh 10.10.10.21 "tar -xv -C /dmpfiles"          -- GNU tar (Linux) defaults to stdout/stdin
  $               tar -cf - expdp_*.dmp | rsh 10.10.10.21 "tar -xvf - -C /dmpfiles"    -- with rsh instead of ssh
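Whichever way you move the dump files, it is worth checking that they arrived intact before the import starts. A minimal sketch using POSIX `cksum` (present on AIX and Linux alike); the list file names and the function name are my own:

```shell
#!/bin/sh
# Verify the dump files arrived intact by comparing POSIX cksum lists.
#
# on the source server:  cd /dmpfiles && cksum expdp_*.dmp > /tmp/cksum_source.txt
# on the target server:  cd /dmpfiles && cksum expdp_*.dmp > /tmp/cksum_target.txt
# copy one list next to the other, then compare:

check_dumps() {
    # $1 = checksum list from the source, $2 = checksum list from the target
    if diff "$1" "$2" > /dev/null; then
        echo "dump files match"
    else
        echo "dump files DIFFER - do not import!" >&2
        return 1
    fi
}
```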
  Check NLS_LANG in both DBs:
  SQL> select (select upper(value) from nls_database_parameters where parameter='NLS_LANGUAGE')
               || '_' ||
              (select upper(value) from nls_database_parameters where parameter='NLS_TERRITORY')
               || '.' ||
              (select upper(value) from nls_database_parameters where parameter='NLS_CHARACTERSET') NLS_LANG
           from dual;

      NLS_LANG
      -------------------------
      AMERICAN_AMERICA.ZHS16GBK
  Manuals of Data Pump:
  $ expdp help=yes
  $ impdp help=yes
    -- I recommend reading these carefully before you execute anything.

  Collect data information:
    @ Talk with the application developers about the static data and the downtime window.
    @ Talk with the network administrator about the network bandwidth.
    @ Collect information about the schemas and data.
         Source Tablespaces Size:
  SQL> select tablespace_name, sum(bytes)/1024/1024/1024 as "TBS_Size (GB)"
           from dba_data_files group by tablespace_name order by 2 desc;
         Source Segments Size:                 -- You can use extents for an accurate size, but it costs more resources.
  SQL> select owner, sum(bytes)/1024/1024 as "DataSize (MB)"
           from dba_segments where owner in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               group by owner order by 2 desc;
         Source Schemas account information:
  SQL> select username, account_status, default_tablespace, temporary_tablespace from dba_users
           where username in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');
         Source Objects status:
  SQL> select owner, object_name, object_type, status
           from dba_objects where owner in
               ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and status = 'INVALID';
         Count Objects of Source DB:        -- You should do this again in the Target DB after the data has been imported.
  SQL> select count(*) from dba_objects
           where owner in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');
         Create a Database Link for comparing objects:
  SQL> create database link LK_TARGET connect to jackson identified by "jackson_mig"
           using '(DESCRIPTION =
                      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.23)(PORT = 1521))
                      (CONNECT_DATA =
                          (SERVER = DEDICATED)
                          (SERVICE_NAME = TARGET_DB)
                      )
                  )';
  SQL> select instance_name from sys.v$instance@LK_TARGET;
    @ Prepare tablespaces in the Target DB:
         Create the tablespaces in the Target DB before the migration:
  SQL> create tablespace TBS_NAME datafile '+DATA' size 30720M;    -- size it according to the real size of the Source DB
         Create the schemas in the Target DB (if you want):            -- get the DDL to create the users from the Source DB.
  SQL> set long 1000;
  SQL> set line 500;
  SQL> set pagesize 99;
  SQL> select dbms_metadata.get_ddl('USER',U.username)
           from dba_users U where U.username
               in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');
  SQL> select * from dba_sys_privs
           where grantee in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');
  SQL> select * from dba_role_privs
           where grantee in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');

  Create test shell scripts for exporting and importing the data:
    @ Why a shell script instead of executing expdp/impdp directly?
          1. Executed directly, the password of jackson (a DBA user) shows up in the command history.
          2. You can easily edit the command in the script file.
          3. It is easy to execute with the "nohup" command when the network is unstable or the connection is time-limited.
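For point 1 you can go one step further and keep the password out of the script file too, by reading it at run time. A minimal sketch (the function and variable names are my own):

```shell
#!/bin/sh
# Read the Data Pump password at run time so it never appears in the
# command history or in the script file.

read_password() {
    printf "Password for jackson: " >&2
    stty -echo 2>/dev/null || true    # hide typing (ignored when stdin is not a tty)
    read DP_PASS
    stty echo 2>/dev/null || true
    echo >&2
}

# usage inside expdp_data.sh:
#   read_password
#   expdp jackson/"$DP_PASS" parfile=/dmpfiles/exp_data.par
```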
    @ Create the shell script and parfile for exporting the data.
  $ cat > expdp_data.sh << 'EOF'
  #!/bin/sh

  # file name: expdp_data.sh
  # execute directory: /dmpfiles
  # you'd better take these settings from the oracle user's profile on the Source server

  ORACLE_BASE=/app/oracle
  ORACLE_HOME=/app/oracle/product/11.2.0/db_1
  ORACLE_SID=source2          # sid of the Source DB instance on this node
  NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
  LD_LIBRARY_PATH=$ORACLE_HOME/lib
  PATH=$ORACLE_HOME/bin:/usr/sbin:/usr/bin:/etc:/usr/lbin:/usr/bin/X11:/usr/local/bin:$PATH
  export ORACLE_BASE ORACLE_HOME ORACLE_SID NLS_LANG LD_LIBRARY_PATH PATH

  echo "`date`"

  # when PARALLEL is larger than 2, the third dump file could not be created on the NFS.
  expdp jackson/jackson_mig \
  SCHEMAS=FJZKXT,FJZP,LOTTERY,FJYDIM,FJIHOME,SECKILL,FJYDWSC \
  PARALLEL=2 \
  directory=EXPDUMP \
  dumpfile=expdp_data_%U.dmp \
  logfile=expdp_data.log \
  parfile=/dmpfiles/exp_data.par

  echo "`date`"
  EOF
  ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~- End Of File expdp_data.sh ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-
  $ cat > exp_data.par << 'EOF'
  # attach=expdp_data_20160521         # use this to re-attach to a running job. The job name is generated
  #                                      automatically unless you set job_name; find it in the logfile or on the screen.
  # schemas=USER1,...,USERn            # if you want to export more than one schema.
  # parallel=2                         # number of worker processes exporting at the same time.
  # dumpfile=expdp_data_%U.dmp         # %U is replaced by 01..99 automatically as the files are created.
  # directory=Dir1,Dir2,Dir3           # if you will have more than 99 files, set multiple directories here;
  #                                      files 01..99 will appear in each directory.
  # content=all                        # "all" is the default; "metadata_only" does not export data, "data_only"
  #                                      exports only the data.
  # compression                        # don't use it! Trust me! It is far too slow.
  # full=Y                             # if you want to export the full database.
  # exclude=schema:"='HR'"             # if you don't want the schema "HR" (on the command line every quote
  #                                      must be escaped with '\').
  # reuse_dumpfiles=Y                  # if a dump file already exists, it will be overwritten.
  # parfile=exp_data.par               # this file is the parfile; when the statement is too long, a parfile is
  #                                      useful (it can only be given on the command line, not inside a parfile).
  EOF
  ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~ End Of File exp_data.par ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-
    @ Run the shell script.
  $ nohup sh expdp_data.sh >> nohup_expdp_data.log &
  $ tail -f nohup_expdp_data.log                                   -- follow the log of the export.
  $ expdp jackson/jackson_mig attach=jackson.expdp_data_20160521   -- like this, someone may see your password in the command history.
  $ expdp attach=jackson.expdp_data_20160521
  username:
  password:                 -- it is safer like this.
  Export> status            -- shows the status of the job; try other commands by referring to "expdp help=yes".
                            -- you can interact with impdp in the same way.
    @ Create the shell script and parfile for importing the data.
  $ cat > impdp_data.sh << 'EOF'
  #!/bin/sh

  # file name: impdp_data.sh
  # execute directory: /dmpfiles
  # you'd better take these settings from the oracle user's profile on the Target server

  ORACLE_BASE=/app/oracle
  ORACLE_HOME=/app/oracle/product/11.2.0/db_1
  ORACLE_SID=target2          # sid of the Target DB instance on this node
  NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
  LD_LIBRARY_PATH=$ORACLE_HOME/lib
  PATH=$ORACLE_HOME/bin:/usr/sbin:/usr/bin:/etc:/usr/lbin:/usr/bin/X11:/usr/local/bin:$PATH
  export ORACLE_BASE ORACLE_HOME ORACLE_SID NLS_LANG LD_LIBRARY_PATH PATH

  echo "`date`"

  impdp jackson/jackson_mig \
  PARALLEL=2 \
  directory=IMPDUMP \
  dumpfile=expdp_data_%U.dmp \
  logfile=impdp_data.log \
  parfile=/dmpfiles/imp_data.par

  echo "`date`"
  EOF
  ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~- End Of File impdp_data.sh ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-
  $ cat > imp_data.par << 'EOF'
  # attach=impdp_data_20160521                # use this to re-attach to a running job. The job name is generated
  #                                             automatically unless you set job_name; find it in the logfile or on the screen.
  # schemas=USER1,...,USERn                   # if you don't want to import all the schemas exported before.
  # dumpfile=expdp_data_%U.dmp                # %U is replaced by 01..99 automatically; it must match the exported file names.
  # directory=Dir1,Dir2,Dir3                  # according to the path on the Source DB, or the path you copied the dump files to.
  # content=all                               # "all" is the default; "metadata_only" does not import data, "data_only"
  #                                             imports only the data.
  # remap_tablespace=tbs1:tbs2,tbs3:tbs4      # if you want to change the tablespace of the objects in the Target DB.
  # remap_schema                              # change the owner of the data in the Target DB.
  # remap_table                               # change a table name; I recommend you don't make it too complicated.
  # parfile=imp_data.par                      # this file is the parfile; when the statement is too long, a parfile is useful.
  EOF
  ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~ End Of File imp_data.par ~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-
    @ Run the shell script.
  $ nohup sh impdp_data.sh >> nohup_impdp_data.log &
  $ tail -f nohup_impdp_data.log                       -- follow the log of the import.

Comparing:
  Compare the logs of expdp and impdp:
    @ Make sure the shell scripts were executed successfully.
  $ tail -n 5 /dmpfiles/impdp_data.log
    @ Make sure that all tables were imported.
  $ grep rows /dmpfiles/expdp_data.log | wc -l
  $ grep rows /dmpfiles/impdp_data.log | wc -l
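Counting the "rows" lines only proves both logs mention the same number of tables; a per-table comparison also catches a table that exported rows but imported fewer. A sketch assuming the usual 11.2 log line shape (`. . exported "OWNER"."TABLE"  12.3 MB  456 rows`); the function and temp file names are my own:

```shell
#!/bin/sh
# Print '"OWNER"."TABLE" rowcount' pairs from a Data Pump log, sorted,
# so the expdp and impdp lists can be diffed table by table.

rows_by_table() {
    # $1 = expdp or impdp logfile
    awk '/ rows$/ { print $4, $(NF-1) }' "$1" | sort
}

# usage:
#   rows_by_table /dmpfiles/expdp_data.log > /tmp/exp_rows.txt
#   rows_by_table /dmpfiles/impdp_data.log > /tmp/imp_rows.txt
#   diff /tmp/exp_rows.txt /tmp/imp_rows.txt && echo "row counts match"
```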

  Compare the data of the two DBs:
    @ Compare object counts (run in both DBs)
  SQL> select count(*) from dba_objects
           where owner in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC');
    @ If the results are different, run the query below in the Source DB (some index and LOB names will change).
  SQL> select owner,object_name,object_type
           from dba_objects where owner in
               ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and object_type not in ('INDEX','LOB')
       minus
       select owner,object_name,object_type
           from dba_objects@lk_target where owner in
               ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and object_type not in ('INDEX','LOB');
    @ Compare the "INDEX" and "LOB" counts of the schemas that differ, in the Source DB.
  SQL> select owner, count(*) from dba_objects
           where owner in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and object_type in ('INDEX','LOB') group by owner;
  SQL> select owner, count(*) from dba_objects@lk_target
           where owner in ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and object_type in ('INDEX','LOB') group by owner;
    @ Find the per-table difference of "INDEX" and "LOB" for each schema, in the Source DB.
  SQL> select table_name, count(*) from dba_indexes where owner = 'SCHEMA1' group by table_name;
  SQL> select table_name, count(*) from dba_indexes@lk_target where owner = 'SCHEMA1' group by table_name;
  SQL> select table_name, count(*) from dba_lobs where owner = 'SCHEMA1' group by table_name;
  SQL> select table_name, count(*) from dba_lobs@lk_target where owner = 'SCHEMA1' group by table_name;
    @ Find the indexes and columns of the tables that differ, in the Source DB.
  SQL> select owner, table_name, index_name, uniqueness from dba_indexes
           where owner = 'SCHEMA1' and table_name = 'TABLE1';
  SQL> select owner, table_name, index_name, column_name from dba_ind_columns
           where owner = 'SCHEMA1' and table_name = 'TABLE1';
  SQL> select owner, table_name, index_name, column_name from dba_lobs
           where owner = 'SCHEMA1' and table_name = 'TABLE1';
    @ Recreate the missing indexes.
    @ Find the "INVALID" objects.
  SQL> select owner, object_name, object_type, status
           from dba_objects where owner in
               ('FJZKXT','FJZP','LOTTERY','FJYDIM','FJIHOME','SECKILL','FJYDWSC')
               and status = 'INVALID';
    @ Recompile the objects which should not be "INVALID".
  SQL> alter PACKAGE xxx compile;
  SQL> alter PACKAGE xxx compile body;
  SQL> alter PROCEDURE xxx compile;
  SQL> alter FUNCTION xxx compile;
  SQL> alter TRIGGER xxx compile;
  SQL> alter VIEW xxx compile;

  SQL> @?/rdbms/admin/utlrp.sql             -- or just run this as sys to recompile everything.

Some Cleanup:
    @ Maybe you want to lock the users in the Source DB.
  SQL> drop database link lk_target;
  SQL> alter user FJZKXT account lock;
  SQL> alter user FJZP account lock;
  SQL> alter user LOTTERY account lock;
  SQL> alter user FJYDIM account lock;
  SQL> alter user FJIHOME account lock;
  SQL> alter user SECKILL account lock;
  SQL> alter user FJYDWSC account lock;
    @ For security, lock or drop the user "JACKSON".
  SQL> alter user JACKSON identified by "a password even I don't know what it is" account lock;     -- in both DBs
  SQL> drop user JACKSON;

End:

From the ITPUB blog; link: http://blog.itpub.net/25521690/viewspace-2104719/. Please cite the source when reposting.
