[TiDB] Enabling binlog on TiDB v5.1.1

1. Check whether binlog is enabled on TiDB

mysql>  show global variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | OFF   |
+---------------+-------+
1 row in set (0.02 sec)

--OFF means binlog is not enabled; there are also no pump or drainer nodes registered yet:
mysql> show pump status;
Empty set (0.01 sec)

mysql> show drainer status;
Empty set (0.04 sec)

2. Enable binlog

(1) Prepare the scale-out topology file for the binlog components
[tidb@mysql1 soft]$ cat scale_out_binlog.yaml 
pump_servers:
- host: 192.168.1.14    # server to deploy on
  ssh_port: 22
  port: 8250    # listening port
  deploy_dir: /tidb/tidb-deploy/pump-8250
  data_dir: /tidb/tidb-data/pump-8250
  log_dir: /tidb/tidb-deploy/pump-8250/logs
  config:
    gc: 180 # days to retain binlog data
    log-file: /tidb/tidb-deploy/pump-8250/logs
    pd-urls: http://192.168.1.11:2379,http://192.168.1.12:2379,http://192.168.1.13:2379 # PD cluster endpoints

drainer_servers:
- host: 192.168.1.14    # server to deploy on
  ssh_port: 22
  port: 8249
  deploy_dir: /tidb/tidb-deploy/drainer-8249
  data_dir: /tidb/tidb-data/drainer-8249
  log_dir: /tidb/tidb-deploy/drainer-8249/logs
  config:
    pd-urls: http://192.168.1.11:2379,http://192.168.1.12:2379,http://192.168.1.13:2379
    syncer.db-type: file    # downstream type (write binlog to local files)
    syncer.to.host: 192.168.1.11        # downstream TiDB host
    syncer.to.password: "rootroot"      # downstream TiDB password
    syncer.to.port: 4000                # downstream TiDB port
    syncer.to.user: root                # downstream TiDB user
    syncer.ignore-schemas: INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql  # schemas not to replicate
    syncer.ignore-table:    # tables not to replicate
      - db-name: "*"
        tbl-name: "~^sc.*"      # skip tables starting with sc in any schema
      - db-name: "*"
        tbl-name: "~^tmp.*"     # skip tables starting with tmp in any schema
      - db-name: "*"
        tbl-name: "~^dim.*"     # skip tables starting with dim in any schema
      - db-name: "*"
        tbl-name: "~^sync_table_batch"
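In this config syntax, a `tbl-name` starting with `~` is treated as a regular expression rather than an exact table name. As an illustration only (not the actual drainer code), the effect of the four rules above can be sketched in Python:

```python
import re

# The four tbl-name filters from the topology file above.
# Assumption: a leading "~" marks the rest of the value as a regex;
# without it, the value is matched as an exact table name.
IGNORE_PATTERNS = ["~^sc.*", "~^tmp.*", "~^dim.*", "~^sync_table_batch"]

def is_ignored(table: str) -> bool:
    for pat in IGNORE_PATTERNS:
        if pat.startswith("~"):
            if re.match(pat[1:], table):
                return True
        elif pat == table:
            return True
    return False

print(is_ignored("sc_orders"))   # True  (matches ^sc.*)
print(is_ignored("tmp_load"))    # True  (matches ^tmp.*)
print(is_ignored("my_test1"))    # False (replicated, as seen in the test below)
```

Note that `db-name: "*"` makes each rule apply to every schema; the `my_test1` table written to later in this post matches none of the patterns, which is why its rows show up in the drainer output.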


--Deploy pump_servers and drainer_servers on an empty machine, 192.168.1.14;
--the TiDB cluster is managed from 192.168.1.11.
tiup cluster scale-out tidbcluster scale_out_binlog.yaml --user root -p'rootroot'
[tidb@mysql1 soft]$ tiup cluster scale-out tidbcluster scale_out_binlog.yaml --user root -p
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.5.3/tiup-cluster scale-out tidbcluster scale_out_binlog.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidbcluster
Cluster version: v5.1.1
Role     Host          Ports  OS/Arch       Directories
----     ----          -----  -------       -----------
pump     192.168.1.14  8250   linux/x86_64  /tidb/tidb-deploy/pump-8250,/tidb/tidb-data/pump-8250
drainer  192.168.1.14  8249   linux/x86_64  /tidb/tidb-deploy/drainer-8249,/tidb/tidb-data/drainer-8249
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
Input SSH password: 
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub

  - Download drainer:v5.1.1 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=192.168.1.14, port=22
+ [ Serial ] - EnvInit: user=tidb, host=192.168.1.14
+ [ Serial ] - Mkdir: host=192.168.1.14, directories='/tidb/tidb-deploy','/tidb/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11

+ [ Serial ] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - Mkdir: host=192.168.1.14, directories='/tidb/tidb-deploy/pump-8250','/tidb/tidb-deploy/pump-8250/bin','/tidb/tidb-deploy/pump-8250/conf','/tidb/tidb-deploy/pump-8250/scripts'
+ [ Serial ] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - Mkdir: host=192.168.1.14, directories='/tidb/tidb-deploy/drainer-8249','/tidb/tidb-deploy/drainer-8249/bin','/tidb/tidb-deploy/drainer-8249/conf','/tidb/tidb-deploy/drainer-8249/scripts'
  - Copy blackbox_exporter -> 192.168.1.14 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidbcluster, user=tidb, host=192.168.1.14, service=pump-8250.service, deploy_dir=/tidb/tidb-deploy/pump-8250, data_dir=[/tidb/tidb-data/pump-8250], log_dir=/t
+ [ Serial ] - ScaleConfig: cluster=tidbcluster, user=tidb, host=192.168.1.14, service=drainer-8249.service, deploy_dir=/tidb/tidb-deploy/drainer-8249, data_dir=[/tidb/tidb-data/drainer-8249], l
  - Copy node_exporter -> 192.168.1.14 ... Done
+ Check status
Enabling component pump
	Enabling instance 192.168.1.14:8250
	Enable instance 192.168.1.14:8250 success
Enabling component drainer
	Enabling instance 192.168.1.14:8249
	Enable instance 192.168.1.14:8249 success
Enabling component node_exporter
	Enabling instance 192.168.1.14
	Enable 192.168.1.14 success
Enabling component blackbox_exporter
	Enabling instance 192.168.1.14
	Enable 192.168.1.14 success
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [ Serial ] - Save meta
+ [ Serial ] - StartCluster
Starting component pump
	Starting instance 192.168.1.14:8250
	Start instance 192.168.1.14:8250 success
Starting component drainer
	Starting instance 192.168.1.14:8249
	Start instance 192.168.1.14:8249 success
Starting component node_exporter
	Starting instance 192.168.1.14
	Start 192.168.1.14 success
Starting component blackbox_exporter
	Starting instance 192.168.1.14
	Start 192.168.1.14 success
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tispark-worker-7078.service, deploy_dir=/tidb/tidb-deploy/tispark-worker-7078, data_dir=[], log_dir=/tidb/tidb-deploy/tispark-worker-7078/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/tidb/tidb-deploy/pd-2379, data_dir=[/tidb/tidb-data/pd-2379], log_dir=/tidb/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.12, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/tidb/tidb-deploy/pd-2379, data_dir=[/tidb/tidb-data/pd-2379], log_dir=/tidb/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.13, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/tidb/tidb-deploy/pd-2379, data_dir=[/tidb/tidb-data/pd-2379], log_dir=/tidb/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20160.service, deploy_dir=/tidb/tidb-deploy/tikv-20160, data_dir=[/tidb/tidb-data/tikv-20160], log_dir=/tidb/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.12, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20161.service, deploy_dir=/tidb/tidb-deploy/tikv-20161, data_dir=[/tidb/tidb-data/tikv-20161], log_dir=/tidb/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.13, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20160.service, deploy_dir=/tidb/tidb-deploy/tikv-20160, data_dir=[/tidb/tidb-data/tikv-20160], log_dir=/tidb/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.14, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pump-8250.service, deploy_dir=/tidb/tidb-deploy/pump-8250, data_dir=[/tidb/tidb-data/pump-8250], log_dir=/tidb/tidb-deploy/pump-8250/logs, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4000.service, deploy_dir=/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb/tidb-deploy/tidb-4000/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.12, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4001.service, deploy_dir=/tidb/tidb-deploy/tidb-4001, data_dir=[], log_dir=/tidb/tidb-deploy/tidb-4001/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.13, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4000.service, deploy_dir=/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb/tidb-deploy/tidb-4000/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tiflash-9000.service, deploy_dir=/tidb/tidb-deploy/tiflash-9000, data_dir=[/tidb/tidb-data/tiflash-9000], log_dir=/tidb/tidb-deploy/tiflash-9000/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.14, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/drainer-8249.service, deploy_dir=/tidb/tidb-deploy/drainer-8249, data_dir=[/tidb/tidb-data/drainer-8249], log_dir=/tidb/tidb-deploy/drainer-8249/logs, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/cdc-8300.service, deploy_dir=/tidb/tidb-deploy/cdc-8300, data_dir=[/tidb/tidb-data/cdc-8300], log_dir=/tidb/tidb-deploy/cdc-8300/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/prometheus-9090.service, deploy_dir=/tidb/tidb-deploy/prometheus-9090, data_dir=[/tidb/tidb-data/prometheus-9090], log_dir=/tidb/tidb-deploy/prometheus-9090/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/grafana-3000.service, deploy_dir=/tidb/tidb-deploy/grafana-3000, data_dir=[], log_dir=/tidb/tidb-deploy/grafana-3000/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/alertmanager-9093.service, deploy_dir=/tidb/tidb-deploy/alertmanager-9093, data_dir=[/tidb/tidb-data/alertmanager-9093], log_dir=/tidb/tidb-deploy/alertmanager-9093/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.1.11, path=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tispark-master-7077.service, deploy_dir=/tidb/tidb-deploy/tispark-master-7077, data_dir=[], log_dir=/tidb/tidb-deploy/tispark-master-7077/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - SystemCtl: host=192.168.1.11 action=reload prometheus-9090.service
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Scaled cluster `tidbcluster` out successfully

(2) Edit the cluster configuration
tiup cluster edit-config tidbcluster 

server_configs:
  tidb:
    binlog.enable: true
    binlog.ignore-error: true

(3) Reload the tidb servers
tiup cluster reload tidbcluster -R tidb
--The following two commands do not appear to be necessary; after reloading the cluster, binlog is already enabled.
--tiup cluster reload tidbcluster -R pump
--tiup cluster reload tidbcluster -R drainer

[tidb@mysql1 soft]$ tiup cluster reload tidbcluster -R tidb
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.5.3/tiup-cluster reload tidbcluster -R tidb
Will reload the cluster tidbcluster with restart policy is true, nodes: , roles: tidb.
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.12
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.13
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.14
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.11
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ Refresh instance configs
  - Regenerate config pd -> 192.168.1.11:2379 ... Done
  - Regenerate config pd -> 192.168.1.12:2379 ... Done
  - Regenerate config pd -> 192.168.1.13:2379 ... Done
  - Regenerate config tikv -> 192.168.1.11:20160 ... Done
  - Regenerate config tikv -> 192.168.1.12:20161 ... Done
  - Regenerate config tikv -> 192.168.1.13:20160 ... Done
  - Regenerate config pump -> 192.168.1.14:8250 ... Done
  - Regenerate config tidb -> 192.168.1.11:4000 ... Done
  - Regenerate config tidb -> 192.168.1.12:4001 ... Done
  - Regenerate config tidb -> 192.168.1.13:4000 ... Done
  - Regenerate config tiflash -> 192.168.1.11:9000 ... Done
  - Regenerate config drainer -> 192.168.1.14:8249 ... Done
  - Regenerate config cdc -> 192.168.1.11:8300 ... Done
  - Regenerate config prometheus -> 192.168.1.11:9090 ... Done
  - Regenerate config grafana -> 192.168.1.11:3000 ... Done
  - Regenerate config alertmanager -> 192.168.1.11:9093 ... Done
  - Regenerate config tispark -> 192.168.1.11:7077 ... Done
  - Regenerate config tispark -> 192.168.1.11:7078 ... Done
+ Refresh monitor configs
  - Refresh config node_exporter -> 192.168.1.11 ... Done
  - Refresh config node_exporter -> 192.168.1.12 ... Done
  - Refresh config node_exporter -> 192.168.1.13 ... Done
  - Refresh config node_exporter -> 192.168.1.14 ... Done
  - Refresh config blackbox_exporter -> 192.168.1.13 ... Done
  - Refresh config blackbox_exporter -> 192.168.1.14 ... Done
  - Refresh config blackbox_exporter -> 192.168.1.11 ... Done
  - Refresh config blackbox_exporter -> 192.168.1.12 ... Done
+ [ Serial ] - UpgradeCluster
Upgrading component tidb
	Restarting instance 192.168.1.11:4000
	Restart instance 192.168.1.11:4000 success
	Restarting instance 192.168.1.12:4001
	Restart instance 192.168.1.12:4001 success
	Restarting instance 192.168.1.13:4000
	Restart instance 192.168.1.13:4000 success
Reloaded cluster `tidbcluster` successfully

--Check the pump and drainer processes.
[tidb@mysql4 tidb-data]$ ps -ef |grep tidb
root      85256   2249  0 17:37 pts/0    00:00:00 su - tidb
tidb      85257  85256  0 17:37 pts/0    00:00:00 -bash
tidb      89804      1  0 17:42 ?        00:00:01 bin/pump --node-id=192.168.1.14:8250 --addr=0.0.0.0:8250 --advertise-addr=192.168.1.14:8250 --pd-urls=http://192.168.1.11:2379,http://192.168.1.12:2379,http://192.168.1.13:2379 --data-dir=/tidb/tidb-data/pump-8250 --log-file=/tidb/tidb-deploy/pump-8250/logs/pump.log --config=conf/pump.toml
tidb      89897      1  0 17:42 ?        00:00:00 bin/drainer --node-id=192.168.1.14:8249 --addr=192.168.1.14:8249 --pd-urls=http://192.168.1.11:2379,http://192.168.1.12:2379,http://192.168.1.13:2379 --data-dir=/tidb/tidb-data/drainer-8249 --log-file=/tidb/tidb-deploy/drainer-8249/logs/drainer.log --config=conf/drainer.toml --initial-commit-ts=0
tidb      89990      1  0 17:42 ?        00:00:01 bin/node_exporter/node_exporter --web.listen-address=:9100 --collector.tcpstat --collector.systemd --collector.mountstats --collector.meminfo_numa --collector.interrupts --collector.buddyinfo --collector.vmstat.fields=^.* --log.level=info
tidb      89991  89990  0 17:42 ?        00:00:00 /bin/bash /tidb/tidb-deploy/monitor-9100/scripts/run_node_exporter.sh
tidb      89994  89991  0 17:42 ?        00:00:00 tee -i -a /tidb/tidb-deploy/monitor-9100/log/node_exporter.log
tidb      90084      1  0 17:42 ?        00:00:00 bin/blackbox_exporter/blackbox_exporter --web.listen-address=:9115 --log.level=info --config.file=conf/blackbox.yml
tidb      90085  90084  0 17:42 ?        00:00:00 /bin/bash /tidb/tidb-deploy/monitor-9100/scripts/run_blackbox_exporter.sh
tidb      90088  90085  0 17:42 ?        00:00:00 tee -i -a /tidb/tidb-deploy/monitor-9100/log/blackbox_exporter.log

--binlog is now enabled:
mysql>  show global variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.01 sec)

--Check the binlog files that have been generated:
[tidb@mysql4 drainer-8249]$ ll
total 4
-rw------- 1 tidb tidb  0 Aug 17 17:42 binlog-0000000000000000-20240817174214
-rw-r--r-- 1 tidb tidb 68 Aug 17 17:48 savepoint
[tidb@mysql4 drainer-8249]$ pwd
/tidb/tidb-data/drainer-8249
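The relay file name appears to encode a 16-digit sequence number and a creation timestamp. Assuming that naming convention (inferred from the listing above, not from documentation), it can be parsed like this:

```python
from datetime import datetime

def parse_relay_name(name: str):
    # Assumed convention: binlog-<16-digit sequence>-<YYYYmmddHHMMSS creation time>
    _, seq, ts = name.split("-")
    return int(seq), datetime.strptime(ts, "%Y%m%d%H%M%S")

seq, created = parse_relay_name("binlog-0000000000000000-20240817174214")
print(seq)       # 0
print(created)   # 2024-08-17 17:42:14
```

The timestamp matches the moment the drainer was started (17:42), and the sequence starts at 0; a new file with the next sequence number would appear when the current one is rotated.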

 

3. Test by writing data

mysql> select * from my_test1; 
+------+------+
| id   | name |
+------+------+
|    1 | xs1  |
|    2 | xsq1 |
+------+------+
2 rows in set (0.00 sec)

mysql> insert into my_test1 values(3,'xsq3'),(4,'xsq4');
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

--Watch the file system:
[tidb@mysql4 drainer-8249]$ ll
total 4
-rw------- 1 tidb tidb  0 Aug 17 17:42 binlog-0000000000000000-20240817174214
-rw-r--r-- 1 tidb tidb 68 Aug 17 17:52 savepoint
[tidb@mysql4 drainer-8249]$ ll
total 8
-rw------- 1 tidb tidb 163 Aug 17 17:52 binlog-0000000000000000-20240817174214
-rw-r--r-- 1 tidb tidb  69 Aug 17 17:52 savepoint
--The inserted rows have been written to the binlog file.
[tidb@mysql4 drainer-8249]$ strings binlog-0000000000000000-20240817174214 
test
my_test1
int"
name
varchar"
xsq3
test
my_test1
int"
name
varchar"
xsq4
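The binlog file itself is binary; `strings` simply prints runs of printable ASCII bytes, which is why the schema name (`test`), table name (`my_test1`), column names, and the inserted values (`xsq3`, `xsq4`) are readable in the output above. A minimal sketch of what `strings` does, on a hypothetical blob (not the real drainer file format):

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list:
    # Like the Unix `strings` tool: pull out runs of >= min_len printable ASCII bytes.
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Hypothetical binary record interleaving length/type bytes with readable payload,
# similar in spirit to the drainer output inspected above.
blob = b"\x00\x03test\x01\x08my_test1\x02\x04xsq3\xff"
print(extract_strings(blob))   # ['test', 'my_test1', 'xsq3']
```

This is only a readability check, not a way to recover the data reliably; to actually consume the file-format binlog downstream you would use the TiDB Binlog tooling rather than parsing it by hand.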

4. Summary

Enabling binlog involves restarting the cluster's tidb servers, so it should be done outside business hours.
