Last edited by mhabbyo on 2017-03-18 09:25
Testing with both nodes on the same machine works fine, but across machines the second node cannot join.
After the first node starts, its state shows ONLINE:
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 5e62863b-0ae1-11e7-9d51-fa163e8ed9ca | test1       |       24801 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
The only thing I changed in the configuration was the IP addresses; everything else follows the official documentation.

mysql> show global variables like '%group%';
+----------------------------------------------------+----------------------------------------------------------+
| Variable_name                                      | Value                                                    |
+----------------------------------------------------+----------------------------------------------------------+
| binlog_group_commit_sync_delay                     | 0                                                        |
| binlog_group_commit_sync_no_delay_count            | 0                                                        |
| group_concat_max_len                               | 1024                                                     |
| group_replication_allow_local_disjoint_gtids_join  | ON                                                       |
| group_replication_allow_local_lower_version_join   | OFF                                                      |
| group_replication_auto_increment_increment         | 7                                                        |
| group_replication_bootstrap_group                  | OFF                                                      |
| group_replication_components_stop_timeout          | 31536000                                                 |
| group_replication_compression_threshold            | 1000000                                                  |
| group_replication_enforce_update_everywhere_checks | OFF                                                      |
| group_replication_flow_control_applier_threshold   | 25000                                                    |
| group_replication_flow_control_certifier_threshold | 25000                                                    |
| group_replication_flow_control_mode                | QUOTA                                                    |
| group_replication_force_members                    |                                                          |
| group_replication_group_name                       | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa                     |
| group_replication_group_seeds                      | 172.19.58.11:24901,172.19.58.12:24902,172.19.58.13:24903 |
| group_replication_gtid_assignment_block_size       | 1000000                                                  |
| group_replication_ip_whitelist                     | 172.19.58.11/24                                          |
| group_replication_local_address                    | 127.0.0.1:24901                                          |
| group_replication_poll_spin_loops                  | 0                                                        |
| group_replication_recovery_complete_at             | TRANSACTIONS_APPLIED                                     |
| group_replication_recovery_reconnect_interval      | 60                                                       |
| group_replication_recovery_retry_count             | 10                                                       |
| group_replication_recovery_ssl_ca                  |                                                          |
| group_replication_recovery_ssl_capath              |                                                          |
| group_replication_recovery_ssl_cert                |                                                          |
| group_replication_recovery_ssl_cipher              |                                                          |
| group_replication_recovery_ssl_crl                 |                                                          |
| group_replication_recovery_ssl_crlpath             |                                                          |
| group_replication_recovery_ssl_key                 |                                                          |
| group_replication_recovery_ssl_verify_server_cert  | OFF                                                      |
| group_replication_recovery_use_ssl                 | OFF                                                      |
| group_replication_single_primary_mode              | OFF                                                      |
| group_replication_ssl_mode                         | DISABLED                                                 |
| group_replication_start_on_boot                    | OFF                                                      |
| innodb_log_files_in_group                          | 3                                                        |
| innodb_log_group_home_dir                          | /home/mysql/my24801/log/iblog                            |
| slave_checkpoint_group                             | 512                                                      |
+----------------------------------------------------+----------------------------------------------------------+
38 rows in set (0.00 sec)
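For reference, the group replication part of the my.cnf behind the variables above would look roughly like this (a sketch reconstructed from the SHOW VARIABLES output; the exact file layout is an assumption, not the real config file):

```ini
# Sketch of the group replication settings shown above (this node).
# NOTE: group_replication_local_address is the address OTHER members use
# to reach this node, so 127.0.0.1 only works when all nodes share a host.
[mysqld]
group_replication_group_name          = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_group_seeds         = "172.19.58.11:24901,172.19.58.12:24902,172.19.58.13:24903"
group_replication_local_address       = "127.0.0.1:24901"
group_replication_ip_whitelist        = "172.19.58.11/24"
group_replication_bootstrap_group     = OFF
group_replication_start_on_boot       = OFF
group_replication_single_primary_mode = OFF
```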
The second node simply will not join, no matter what; it keeps logging errors:
2017-03-17T23:21:26.354463+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 71'
2017-03-17T23:21:26.354594+08:00 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2017-03-17T23:21:26.354663+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 73'
2017-03-17T23:21:26.354787+08:00 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2017-03-17T23:21:26.354854+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 75'
2017-03-17T23:21:26.355006+08:00 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2017-03-17T23:21:26.355076+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 77'
2017-03-17T23:21:26.355243+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.19.58.11 24901'
2017-03-17T23:21:26.355522+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.19.58.11 24901 fd 79'
2017-03-17T23:21:56.355981+08:00 0 [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
2017-03-17T23:21:56.356064+08:00 0 [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 24902'
2017-03-17T23:21:56.356174+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_terminate'
2017-03-17T23:21:56.356202+08:00 0 [Note] Plugin group_replication reported: 'new state x_start'
2017-03-17T23:21:56.356208+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_exit'
2017-03-17T23:21:56.356823+08:00 0 [Note] Plugin group_replication reported: 'Exiting xcom thread'
2017-03-17T23:21:56.356849+08:00 0 [Note] Plugin group_replication reported: 'new state x_start'
2017-03-17T23:21:56.356943+08:00 0 [Warning] Plugin group_replication reported: 'read failed'
2017-03-17T23:21:56.373558+08:00 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24902'
2017-03-17T23:22:26.348810+08:00 4 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2017-03-17T23:22:26.348946+08:00 4 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2017-03-17T23:22:26.348974+08:00 4 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2017-03-17T23:22:26.349235+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
2017-03-17T23:22:26.349255+08:00 4 [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
2017-03-17T23:22:26.349434+08:00 75 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2017-03-17T23:22:26.355725+08:00 72 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'
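One thing that stands out in the log: the joiner keeps connecting to 127.0.0.1:24902, which matches the group_replication_local_address shown above. Since that address is what the other members use to reach each node, a cross-machine setup would need each node's externally reachable IP there rather than loopback. A sketch of what the second node's settings might look like (which IP belongs to which node is an assumption on my part):

```sql
-- Assumed: this is the node at 172.19.58.12. local_address must be an
-- address the other members can reach, not a loopback address, and the
-- whitelist must cover every member's IP.
STOP GROUP_REPLICATION;
SET GLOBAL group_replication_local_address = '172.19.58.12:24902';
SET GLOBAL group_replication_ip_whitelist  = '172.19.58.0/24';
START GROUP_REPLICATION;
```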