Two commands are enough to migrate a small amount of data from Redshift to Greenplum, with some manual steps in between. There is presumably a more elegant way.
The source and target tables share the same schema. To avoid problems with special characters, choose the delimiter carefully.
Redshift
UNLOAD ('SELECT * FROM <source_table> WHERE <clause>') TO 's3://<bucket_name>/<..>/<prefix>_'
CREDENTIALS 'aws_access_key_id=<aws_access_key_id>;aws_secret_access_key=<aws_secret_access_key>'
MANIFEST DELIMITER '~' ALLOWOVERWRITE PARALLEL OFF;
Download the unloaded file to the Greenplum master host, e.g. /home/gpadmin/test.csv
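The download step can be sketched with the AWS CLI. This is a minimal sketch, assuming the AWS CLI is installed and configured on the master host; the bucket, prefix, and part-file name below are placeholders, not values from the original post (with PARALLEL OFF, Redshift typically writes a single part file named `<prefix>_000` plus a `<prefix>_manifest`).

```shell
# Hypothetical values -- substitute your own UNLOAD target.
BUCKET="my-bucket"
PREFIX="exports/mytable_"

download_unload() {
  # The manifest lists every part file produced by UNLOAD.
  aws s3 cp "s3://${BUCKET}/${PREFIX}manifest" /tmp/unload_manifest.json
  # With PARALLEL OFF there is normally one part; copy it to the path
  # that the Greenplum COPY command below expects.
  aws s3 cp "s3://${BUCKET}/${PREFIX}000" /home/gpadmin/test.csv
}
```

For larger unloads (PARALLEL ON), you would instead iterate over the entries in the manifest, or use `aws s3 cp --recursive` with the prefix.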
Greenplum
COPY <dest_table> FROM '/home/gpadmin/test.csv' DELIMITER '~' NULL AS '' CSV LOG ERRORS SEGMENT REJECT LIMIT 100 ROWS;
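Run from the master, this can be wrapped in psql; with LOG ERRORS, rows rejected under the reject limit are recorded and can be inspected afterwards via Greenplum's gp_read_error_log() function. A minimal sketch, assuming psql connectivity to the master; the database and table names are hypothetical placeholders.

```shell
# Hypothetical names -- substitute your own database and table.
DB="analytics"
DEST_TABLE="my_table"

load_and_check() {
  # Load the downloaded file; up to 100 malformed rows are skipped and logged.
  psql -d "$DB" -c "COPY ${DEST_TABLE} FROM '/home/gpadmin/test.csv' DELIMITER '~' NULL AS '' CSV LOG ERRORS SEGMENT REJECT LIMIT 100 ROWS;"
  # Inspect any rejected rows (line number and error message).
  psql -d "$DB" -c "SELECT linenum, errmsg FROM gp_read_error_log('${DEST_TABLE}');"
}
```

If the rejected-row count exceeds the SEGMENT REJECT LIMIT, the whole COPY aborts, so the limit acts as a quality threshold rather than a silent filter.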
Done.
Official documentation:
https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
https://gpdb.docs.pivotal.io/550/ref_guide/sql_commands/COPY.html