root@wyl01:/gsclient# gluster volume remove-brick gv1 gluster004-hf-aiui:/data help
Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
root@wyl01:/gsclient# gluster volume remove-brick gv1 gluster004-hf-aiui:/data start
Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: e30a9e72-53ef-4e79-a394-38dcac9061ba
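The prompt above warns about `cluster.force-migration`. Before starting the removal, the option can be inspected and explicitly disabled so files that receive writes mid-migration are skipped instead of force-migrated. A minimal sketch using the volume name `gv1` from this transcript:

```shell
# Show the current value of the option
gluster volume get gv1 cluster.force-migration

# Explicitly disable it so files receiving writes are skipped, not force-migrated
gluster volume set gv1 cluster.force-migration off
```

Files skipped this way appear in the `skipped` column of `remove-brick ... status` and can be copied in manually after the commit.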
# Check the status of the brick removal
root@wyl01:/gsclient# gluster volume remove-brick gv1 gluster004-hf-aiui:/data status
          Node Rebalanced-files          size       scanned      failures       skipped         status  run time in h:m:s
     ---------      -----------   -----------   -----------   -----------   -----------   ------------     --------------
192.168.52.125               17        0Bytes            17             0             0      completed            0:00:00
# Once the data has been migrated to the remaining bricks, commit the removal
root@wyl01:/gsclient# gluster volume remove-brick gv1 gluster004-hf-aiui:/data commit
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
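As the commit output advises, the removed brick should be checked for files that were not migrated. A hedged sketch, run on the removed node, using the brick path `/data` from the transcript; the client mount point `/gsclient` and the file name `leftover-file` are assumptions for illustration:

```shell
# On gluster004-hf-aiui: list regular files left behind on the brick,
# skipping GlusterFS internal metadata under .glusterfs
find /data -path /data/.glusterfs -prune -o -type f -print

# Any file found must be copied back through a client mount point
# (assumed here to be /gsclient) -- never written directly into a brick
cp -a /data/leftover-file /gsclient/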
root@wyl01:/gsclient# gluster volume rebalance gv1 start
volume rebalance: gv1: success: Rebalance on gv1 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 90df529c-d950-4010-9248-19ffa7c83853
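As the success message notes, the rebalance runs in the background; its progress can be followed with the `status` subcommand, and it can be aborted if needed:

```shell
# Per-node progress of the running rebalance
gluster volume rebalance gv1 status

# Abort the rebalance if it should not continue
gluster volume rebalance gv1 stop
```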
Scaling in nodes: since this is a distributed-replicated volume, bricks must be removed one full replica set at a time, i.e. in pairs here. The procedure is as follows:
# Start removing the replica pair
root@wyl01:/gsclient# gluster volume remove-brick gv1 replica 2 wyl03-hf-aiui:/data wyl04-hf-aiui:/data start
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avaoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
Running remove-brick with cluster.force-migration enabled can result in data corruption. It is safer to disable this option so that files that receive writes during migration are not migrated.
Files that are not migrated can then be manually copied after the remove-brick commit operation.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: d4ce7df1-30c9-4124-9986-c9634986609f
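As in the single-brick case above, migration progress for the pair can be polled before committing; commit only once every node reports `completed`:

```shell
# Poll migration progress for the pair being removed
gluster volume remove-brick gv1 replica 2 \
    wyl03-hf-aiui:/data wyl04-hf-aiui:/data status
```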
# Once the data has been migrated to the remaining bricks, commit the removal
root@wyl01:/gsclient# gluster volume remove-brick gv1 replica 2 wyl03-hf-aiui:/data wyl04-hf-aiui:/data commit
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avaoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
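After the commit, the freed nodes can be dropped from the trusted storage pool before re-purposing them. A sketch, assuming neither node hosts bricks for any other volume:

```shell
# Remove the freed nodes from the trusted storage pool
gluster peer detach wyl03-hf-aiui
gluster peer detach wyl04-hf-aiui

# Confirm the remaining pool membership
gluster peer status
```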