Migrating a MySQL Database to PostgreSQL


After researching and trying a number of approaches, I ended up with the following procedure:

1. Export the MySQL table definitions (without data):

mysqldump --no-data [dbname] >dbdef.sql

2. Use mysql2postgres to convert the dump to PostgreSQL syntax.

3. The converted script will not necessarily be perfect. Try importing it into PostgreSQL, debug the errors, and fix them by hand. I ran into only a few issues: the zerofill attribute in MySQL column definitions has to be removed manually; some unsigned definitions generate constraints, which can be dropped if you don't need them; and triggers do not convert at all, so they must be rebuilt by hand later.
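
A minimal sketch of that cleanup, using a hypothetical table (none of these names come from a real schema):

-- MySQL original (for illustration only):
-- CREATE TABLE sample (
--   id   INT(10) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
--   hits INT UNSIGNED NOT NULL DEFAULT 0,
--   PRIMARY KEY (id)
-- );
-- Hand-cleaned PostgreSQL equivalent: zerofill is simply dropped,
-- auto_increment becomes serial, and unsigned can optionally be kept
-- as a CHECK constraint or dropped entirely.
CREATE TABLE sample (
    id   serial  PRIMARY KEY,
    hits integer NOT NULL DEFAULT 0 CHECK (hits >= 0)
);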

4. Export the MySQL data:

mysqldump -v -nt --complete-insert=TRUE --compact --no-create-info --skip-quote-names [dbname] >dbdata.sql

If you are loading into an old PostgreSQL release that does not accept multi-row inserts, add --extended-insert=FALSE as well, but the performance loss is huge.
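
For reference, the options above produce column-qualified INSERT statements without backtick quoting, roughly like this (table and column names are made up for illustration):

INSERT INTO sample (id, name) VALUES (1,'first'),(2,'second');
-- and with --extended-insert=FALSE, one statement per row (much slower to load):
INSERT INTO sample (id, name) VALUES (1,'first');
INSERT INTO sample (id, name) VALUES (2,'second');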

5. Escape characters

By default MySQL treats '\' inside string literals as an escape character, while PostgreSQL does not. Adjust postgresql.conf:

backslash_quote = on

escape_string_warning = off

standard_conforming_strings = off

Once the data import is finished, these settings can be changed back to their defaults.
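
To illustrate with a made-up row: a MySQL dump escapes embedded quotes and backslashes like this, and the settings above make PostgreSQL read the backslashes the same way MySQL wrote them:

INSERT INTO sample (id, note) VALUES (1, 'it\'s in C:\\temp');
-- With standard_conforming_strings = off this loads as:  it's in C:\temp
-- With standard_conforming_strings = on, the \' would terminate the string
-- early and the statement would fail.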

6. Import the table definitions and data into PostgreSQL:

psql -d [dbname] <dbdef.sql

psql -d [dbname] <dbdata.sql

7. Rebuild the triggers.
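
In PostgreSQL a trigger is split into a trigger function plus a CREATE TRIGGER statement, so each MySQL trigger has to be rewritten along these lines (a minimal sketch with hypothetical names; it assumes the plpgsql language is installed in the database):

CREATE OR REPLACE FUNCTION sample_touch() RETURNS trigger AS $$
BEGIN
    NEW.updated := now();   -- keep an 'updated' timestamp current
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sample_touch_trg
    BEFORE UPDATE ON sample
    FOR EACH ROW EXECUTE PROCEDURE sample_touch();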

8. Handling auto-increment primary key columns

Because the imported rows already carry values for these columns, the PostgreSQL sequences never advance. Set the current value of each sequence with a statement like:

SELECT setval('sample_id_seq', max(id)) FROM sample;
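
If the column was created as serial, you can avoid hard-coding the sequence name by letting PostgreSQL look it up (same hypothetical sample table as above):

SELECT setval(pg_get_serial_sequence('sample', 'id'), (SELECT max(id) FROM sample));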

 

Finally, if the data volume is large, you can drop primary keys, indexes and constraints before the import for better performance, then re-create them after it finishes. Also, the psql client does not seem to be very fast when fed through a pipe; connecting over TCP with psql -h localhost helps. There are also a few server parameters worth tuning for bulk loading, roughly:

autovacuum = off

wal_level = minimal

archive_mode = off

full_page_writes = off

fsync = off

checkpoint_segments = 50  
checkpoint_timeout = 1h

maintenance_work_mem = as large as available memory allows
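
A sketch of the drop-and-recreate approach mentioned above (the index, constraint and table names are hypothetical):

-- before the bulk load:
ALTER TABLE sample DROP CONSTRAINT sample_pkey;
DROP INDEX sample_name_idx;
-- ... import the data ...
-- after the bulk load:
ALTER TABLE sample ADD CONSTRAINT sample_pkey PRIMARY KEY (id);
CREATE INDEX sample_name_idx ON sample (name);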



Drupal 6.x Site Migration from MySQL to PostgreSQL (howto)

by Alex Tutubalin

I'm not a MySQL fan, and troubles with the MySQL database (MyISAM data corruption and too-slow InnoDB) forced a migration of three existing Drupal 6 sites from MySQL to PostgreSQL. Since Drupal 6.2 a number of PgSQL-related bugs (sample) in the core modules have been fixed, so I gave it a try.

Drupal 6.5 and PostgreSQL 8.3 compatibility

Drupal Core Modules

I installed a clean Drupal 6.5 on top of PostgreSQL 8.3.3 (with UTF8 database encoding) and played with it a lot. There were no serious problems, only minor ones:

  1. My database instance has a non-UTF charset as the default client_encoding (in postgresql.conf). It is not possible to change this setting because of a large amount of legacy code, so it is much simpler to explicitly set the connection encoding after connecting to the database.
    Patch: drupal.pgsql.diff.gz (see also my bug report).
  2. Blob data (in the cache_* tables) is not escaped correctly. PostgreSQL requires an 'E' prefix before backslash-escaped values, so the database log fills up with warnings (2 lines for each cached element).
    Patch: drupal65-pgsql8x-patch2.diff.gz and bug report (with the same patch attached).
  3. MySQL truncates string values that are longer than specified in the DDL, which violates the SQL standard (but is more 'user friendly'). PgSQL refuses such records and raises an error. This problem shows up in many places, such as a too-long module name passed to the watchdog() function, a too-long translated string in the Locale module, and so on. There is no easy way to fix it; you need to examine the database logs and fix the Drupal code step by step. I've fixed just one module name ('comment_subscribe' becomes 'comment_subscrib'); a minimal illustration follows this list.
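
As a minimal illustration of that truncation difference (table and column are made up for this example):

-- hypothetical column limited to 16 characters
CREATE TABLE names (name varchar(16));
-- MySQL in its traditional non-strict mode silently truncates the value to
-- 'comment_subscrib'; PostgreSQL aborts with
-- ERROR:  value too long for type character varying(16)
INSERT INTO names (name) VALUES ('comment_subscribe');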

Contributed Drupal Modules

I use only ten 3rd-party modules. Most of them work under PgSQL without problems:

  1. Admin Block
  2. GeSHi Filter and GeSHi node
  3. Image, Image assist, Image Attach, ImageMagick Advanced Options
  4. Advanced Help
  5. Site Menu
  6. Taxonomy Menu
  7. CAPTCHA pack, Random CAPTCHA
  8. Tagadelic
  9. Views

Comment Subscribe Module Changes

The Comment Subscribe module works with PgSQL only after a heavy rewrite. It uses many non-standard MySQL features such as if/ifnull (instead of case/coalesce), concat (instead of the || operator) and the multi-table UPDATE clause; typical replacements are sketched below.
Patch: comment_subscribe-pgsql.diff and bug report.
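
A minimal sketch of those replacements, using a hypothetical subscriptions table (none of these names come from the actual module):

-- MySQL-only constructs:
--   SELECT IF(active = 1, 'yes', 'no'), IFNULL(email, ''), CONCAT(name, ' <', email, '>') FROM subscriptions;
-- Portable forms that PostgreSQL accepts:
SELECT CASE WHEN active = 1 THEN 'yes' ELSE 'no' END,
       COALESCE(email, ''),
       name || ' <' || email || '>'
FROM subscriptions;

-- MySQL multi-table UPDATE:
--   UPDATE subscriptions s, users u SET s.email = u.mail WHERE s.uid = u.uid;
-- PostgreSQL form using UPDATE ... FROM:
UPDATE subscriptions SET email = u.mail FROM users u WHERE subscriptions.uid = u.uid;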

MySQL to PgSQL Drupal Migration

In theory it is simple: convert the database structures, copy the data, then change the data source in the Drupal configuration. Sounds easy.

In real life it is not so easy:

  • The Backup and Migrate module is not compatible with PgSQL, and the amount of required changes is not small.
  • The various mysql2pgsql conversion scripts do not do the job (I've tried several of them).

So I opted for an alternate way: create a new empty site with the same data structures, then move the data with a special migration script.

In Drupal any module may have its own data tables in the database, so the real migration procedure is:

  1. Back up your existing site! Both the code and the database!
  2. Install Drupal from source, add the patches (see above), then add modules and module patches (if any). If you already have a running Drupal site, you only need the PgSQL-compatibility patches (see above).

    Also, your existing site should have its own directory under sites/, not just sites/default.
  3. Create an unused DNS name (e.g. www-new.mysite.com), create a new PostgreSQL database, then install Drupal for this hostname on the fresh database. You can supply any site data during the installation phase; it will be overwritten during the data-copy phase.
  4. Enable every module that is enabled on the site being migrated. Enabling a module creates all of its tables/sequences in your PostgreSQL database.
  5. Run the Magic Migration Script (see below). Your new site now has all the data copied (with the exception of the cache_* tables and locale).
  6. If you use the Locale module, export your Locale data from the source site and import it on the new site.
  7. Change the data source ($db_url in settings.php) of your old site to the new database.
  8. That's all, folks!

Note: you need shell access to your server.

Magic drupal-mysql2pgsql.pl Script

Download it here: drupal-mysql2pgsql.pl.gz (do not forget to gunzip it).

The script requires Perl 5.x (I use 5.8, but Perl 5.6 should work) with the DBI, DBD::Pg and DBD::mysql libraries.

Usage: drupal-mysql2pgsql.pl source-db dest-db, where source-db is the source MySQL database and dest-db is the destination PostgreSQL database.

You should either have passwordless access to both databases or edit the script to add your username/password. This is a one-time-use script, so I have not added much command-line parsing code.

How it works:

  • Both databases are queried for their list of tables (example queries below). If a table exists in the source DB but not in the destination, the script warns you.
  • All data in the destination tables is erased.
  • Most data from the source tables is copied into the destination; Locale data and cache are not copied.
  • All sequences are initialized to max(column).

All destination data changes are done within a transaction.
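
For reference, the table lists can be obtained with queries along these lines (just one possible way; the actual script may do it differently):

-- MySQL side:
SHOW TABLES;
-- PostgreSQL side (user tables in the public schema):
SELECT tablename FROM pg_tables WHERE schemaname = 'public';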

The script is provided AS IS. It works for me; you can change it as you wish.
