Cross-Database Data Migration Schemes

Preface

This chapter covers migration schemes that do not depend on a third-party programming language and do not go through application business logic; they are aimed mainly at operations staff. They are typically used for one-off bulk migration of historical data, where writing a pile of throwaway code just for a single migration is not an elegant way to handle it.

MySQL To MySQL

Core idea: use mysqldump to dump the source tables and their data, then use the mysql client's source command to recreate them as temporary tables in the target database. The whole flow is illustrated by the shell script below; swap in your own connection settings and table structures and run it in one shot. The other migration schemes follow the same pattern.

#!/bin/bash 
echo "Starting data migration."

#Dump the structure and data of the source tables into temporary script files with mysqldump
mysqldump -h192.168.137.129 -P3307 -uroot -proot parauser --tables t_ssh_accounts t_user > ./parauser.sql;
mysqldump -h192.168.137.129 -P3307 -uroot -proot ngbilling --tables b_user b_group b_cluster b_cluster_user  > ./ngbilling.sql;

#Connect to the target database and run the migration; the whole script is wrapped in double quotes
mysql -h192.168.137.129 -P3306 -uroot -proot -e"
#Switch to the target database
use console;
#Set the character set; keep it consistent with the source tables
set names utf8;
#Run the dumped scripts to create the source tables and their data; they are temporary and dropped once the migration is done
source ./parauser.sql;
source ./ngbilling.sql;
#Run the migration statements; adjust them to your own business needs
INSERT IGNORE INTO user_info (id,name,email,phone,real_email,group_id,user_type)
    	SELECT *
    	FROM (
    			SELECT id as user_id,name as username,email,phone as mphone,real_email AS para_email,group_id,user_type FROM user_info
    			UNION ALL
    			SELECT a.user_id,a.username,a.email,a.mphone,a.para_email,a.group_id,
    				IF(b.origin_channel = 'bscc' OR b.origin_channel = 'bscc-cstcloud' OR b.origin_channel = 'bscc-carsi',2,1) as user_type
    				FROM b_user a LEFT JOIN t_user b ON a.user_id=b.user_id
    	) tb
    	GROUP BY user_id
    	HAVING count(*) = 1
    	ORDER BY user_id;

INSERT IGNORE INTO account_group (id,pay_user_id,master_user_id,name)
	SELECT *
	FROM (
			SELECT id as group_id,pay_user_id as pay_user_id, master_user_id as master_user_id,name as group_name FROM account_group
			UNION ALL
			SELECT group_id,pay_user_id, master_user_id,group_name FROM b_group
	) tb
	GROUP BY group_id
	HAVING count(*) = 1
	ORDER BY group_id;

INSERT IGNORE INTO cluster (id,name,online,owned,res_type)
	SELECT *
	FROM (
			SELECT id as name,name as alias,online as is_online,owned as is_proxy,res_type as resource_type FROM cluster
			UNION ALL
			SELECT name,alias,is_online,is_proxy,resource_type FROM b_cluster where is_name_valid = 1
	) tb
	GROUP BY name
	HAVING count(*) = 1
	ORDER BY name;

DELETE a FROM cluster_user a
		LEFT JOIN b_cluster_user b ON a.cluster_id = b.cluster
		AND a.user = b.username WHERE b.is_bound = 0;
INSERT IGNORE INTO cluster_user ( cluster_id, user, pay_user_id )
	SELECT DISTINCT cluster, username, user_id
	FROM b_cluster_user a
	WHERE NOT EXISTS ( SELECT 1 FROM cluster_user b WHERE b.cluster_id = a.cluster AND b.user = a.username )
		AND is_bound = 1;

DELETE a FROM cluster_usage a
		LEFT JOIN t_ssh_accounts b ON a.cluster_id = b.cluster_code
		AND a.user_id = b.user_id
	WHERE b.cluster_code  is null AND b.user_id is null;
INSERT IGNORE INTO cluster_usage ( cluster_id, user_id, cluster_user ) SELECT
	cluster_code,
	a.user_id,
	cluster_login_name
	FROM
		t_ssh_accounts a
	WHERE
		NOT EXISTS ( SELECT 1 FROM cluster_usage b WHERE b.cluster_id = a.cluster_code AND b.user_id = a.user_id );
#Drop the temporary source tables now that the data has been migrated
DROP TABLES t_ssh_accounts,t_user,b_user,b_group,b_cluster,b_cluster_user;
commit;"

#Remove the temporary mysqldump script files
rm ./ngbilling.sql ./parauser.sql -f;

echo "Data migration finished."
exit; 
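The INSERT statements in the script above all rely on the same idempotence trick: UNION ALL the rows already in the target table with the candidate rows from the source, keep only ids that appear exactly once (GROUP BY ... HAVING COUNT(*) = 1), and let INSERT IGNORE discard any survivor whose id already exists. A source row therefore gets inserted only when it is genuinely new. The same filter can be sketched with plain shell tools; existing.txt and incoming.txt are made-up stand-ins for the two UNION ALL branches:

```shell
# ids already present in the target table (first UNION ALL branch)
printf '%s\n' 1 2 3 > existing.txt
# candidate ids arriving from the source table (second branch)
printf '%s\n' 2 3 4 5 > incoming.txt
# keep ids seen exactly once across both files (the HAVING COUNT(*) = 1 step),
# then drop the ones that only existed in the target (the INSERT IGNORE step)
cat existing.txt incoming.txt | sort | uniq -u | grep -vxf existing.txt
# prints 4 and 5: exactly the rows that are new to the target
```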

MongoDB To MySQL

Core idea: export the data with the mongoexport command, then import it with LOAD DATA INFILE (into a temporary table whose structure mirrors the MongoDB documents).

#1. Export the userPref data with mongoexport (or another tool) in .csv format; -o points at whatever location you need
mongoexport --port 27001 -u admin -p 123456 --authenticationDatabase=admin -d userpref -c userpref -f userId,userPref.billingShowMoney,userPref.billingShowCoreTime --type=csv -o /usr/local/userpref/userpref.csv

#2. MySQL does not allow loading arbitrary files by default. To avoid changing the MySQL configuration, place the csv file in the directory designated by secure_file_priv; first look up the relevant settings
show global variables like 'local_infile';#check whether local import is enabled
show global variables like 'secure_file_priv';#check the directory file imports are restricted to

#3. Move the file exported in step 1 into the secure_file_priv directory and relax its permissions
chmod 777 userpref.csv

#4. In the exported csv file, replace the boolean literals: true becomes 1 and false becomes 0, so the int columns below can store them.
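Step 4 does not need manual editing; a one-line sed can do the replacement. The sample line below is a made-up stand-in for a real mongoexport row:

```shell
# a fake sample row in the same shape as the mongoexport output above
echo 'u123,true,false' > userpref.csv
# turn the boolean literals into 1/0 so the int columns can store them
sed -i 's/,true/,1/g; s/,false/,0/g' userpref.csv
cat userpref.csv
# prints u123,1,0
```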

#5. In the console database, create a userpref table whose columns match the exported Mongo fields
CREATE TABLE `console`.`userpref`  (
  `userId` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `userpref.billingShowMoney` int(1) NULL DEFAULT NULL,
  `userpref.billingShowCoreTime` int(1) NULL DEFAULT NULL
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;

#6. Load the data exported from MongoDB into the userpref table
LOAD DATA LOCAL INFILE '<secure_file_priv directory>/userpref.csv' REPLACE INTO TABLE userpref FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '"' LINES TERMINATED BY '\n' IGNORE 1 LINES;


#7. Run the following SQL against the console database to update the user preferences
UPDATE user_info a INNER JOIN userpref b ON a.id = b.userId
SET a.user_pref = (
	CASE WHEN b.`userPref.billingShowCoreTime` = 1 AND b.`userPref.billingShowMoney` = 0
		THEN 2
		ELSE 1
	END
);

MySQL To Redis

Core idea: flush MySQL data into Redis through redis-cli's pipe mode.

#1. Build the Redis command stream from a MySQL query; see mysql_to_redis.sql below
#2. Run that query and write the result to a file (adjust the host, database, user and password as needed; the mysql_to_redis.sql and mtr.txt paths are paths inside the MySQL container)
docker exec mysql /bin/bash -c 'mysql -h192.168.137.128 -P3306 -uroot -proot -Dngbilling --skip-column-names --raw < /mydir/mysql_to_redis.sql > /mydir/mtr.txt'
#3. cat the data file and pipe it into redis-cli so the commands are flushed into Redis in bulk (adjust the host, database index and password as needed; mtr.txt sits at the Redis container's root path)
cat mtr.txt | redis-cli -h 192.168.137.128 -n 0 --pipe -a root

mysql_to_redis.sql

SELECT DISTINCT CONCAT(
  "*3\r\n", #3 = number of arguments that follow; --pipe uses it to split the stream
  '$', LENGTH('SADD'),'\r\n','SADD','\r\n', #Redis command
  '$', LENGTH('console:biz:cluster:appclouduser'),'\r\n', 'console:biz:cluster:appclouduser','\r\n', #Redis key
  '$', LENGTH(CONCAT(m_cluster,':',m_user)), '\r\n',CONCAT(m_cluster,':',m_user),'\r' #Redis value; the \n mysql prints at the end of each row completes the final \r\n
)
FROM ngbilling.b_cloud_user_map WHERE effective = 1;
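For reference, this is what one generated frame looks like on the wire; the cluster and user values below are hypothetical. In the SQL above the value ends with a bare \r because mysql itself appends a \n to every result row, completing the final \r\n; the sketch below writes that terminator out in full:

```shell
# hypothetical sample row: m_cluster = 'hpc1', m_user = 'alice'
key='console:biz:cluster:appclouduser'
member='hpc1:alice'
# emit the same Redis protocol frame the SQL builds for this row:
# argument count, then each argument prefixed with its byte length
printf '*3\r\n$4\r\nSADD\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n' \
  "${#key}" "$key" "${#member}" "$member"
```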