ClickHouse Backup and Restore: How to Back Up and Restore ClickHouse in a Docker Environment

Summary:
For ClickHouse running in Docker, clickhouse-backup is not a viable option: connecting remotely to the containerized server, it can only back up the metadata, not the data itself.
Instead, connect to the server with clickhouse-client over TCP port 9000 (or the HTTP interface on port 8123) and use the BACKUP / RESTORE commands to back up and restore.
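
The BACKUP/RESTORE approach from the summary can be sketched as follows. The database name `mydb` and the archive name are hypothetical placeholders, and the `backups` disk must already be declared in the server configuration (it is, on this server, as shown below):

```sql
-- Back up one database to the pre-configured 'backups' disk
-- (run from clickhouse-client connected over port 9000 or HTTP 8123).
BACKUP DATABASE mydb TO Disk('backups', 'mydb_backup.zip');

-- Restore it later from the same archive.
RESTORE DATABASE mydb FROM Disk('backups', 'mydb_backup.zip');
```

BACKUP/RESTORE is available from ClickHouse 22.8 onward, so the 22.9.3 server used here supports it.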



[dailachdbud005 ~]# docker ps
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS      PORTS                                                                                                NAMES  
197a50985f73   clickhouse/clickhouse-server:22.9.3               "/entrypoint.sh"         16 months ago   Up 9 days   0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9009/tcp       root-clickhouse-1
[dailachdbud005 ~]# ps -ef|grep click|grep -v grep
101        2330   2260  6 Jun02 ?        14:52:20 /usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml

[dailachdbud005 ~]# clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000
197a50985f73 :) select name,path,type from system.disks;
   ┌─name────┬─path─────────────────┬─type──┐
1. │ backups │ /backups/            │ local │
2. │ default │ /var/lib/clickhouse/ │ local │
   └─────────┴──────────────────────┴───────┘
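
For the BACKUP command to target `Disk('backups', …)`, the disk has to be declared in the server configuration. A minimal sketch of such a `config.d` fragment (the file layout is an assumption; the disk name and path match the `system.disks` output above):

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <!-- local disk backing the /backups mount inside the container -->
            <backups>
                <type>local</type>
                <path>/backups/</path>
            </backups>
        </disks>
    </storage_configuration>
    <backups>
        <!-- restrict BACKUP/RESTORE to this disk -->
        <allowed_disk>backups</allowed_disk>
        <allowed_path>/backups/</allowed_path>
    </backups>
</clickhouse>
```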
The /backups disk maps to the host directory /var/lib/docker/volumes/root_clickhouse_backup/_data, as docker inspect shows:
[dailachdbud005 ~]# docker inspect 197a50985f73 | grep backups -B 3 -A 3
                {
                    "Type": "volume",
                    "Source": "root_clickhouse_backup",
                    "Target": "/backups",
                    "VolumeOptions": {}
                }
            ],
--
                "Type": "volume",
                "Name": "root_clickhouse_backup",
                "Source": "/var/lib/docker/volumes/root_clickhouse_backup/_data",
                "Destination": "/backups",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
--
            "Cmd": null,
            "Image": "clickhouse/clickhouse-server:22.9.3",
            "Volumes": {
                "/backups": {},
                "/etc/clickhouse-server/config.d": {},
                "/etc/clickhouse-server/users.d": {},
                "/var/lib/clickhouse": {}



clickhouse-backup
Verified in practice: it only backed up metadata. When clickhouse-backup connects to the server remotely (without direct access to the container's data directory), it can only capture table metadata, not the data parts.
[odsonescoredev3 /]# cat /etc/clickhouse-backup/config.yml
general:
  remote_storage: none
  max_file_size: 1099511627776
  disable_progress_bar: false
  backups_to_keep_local: 3
  backups_to_keep_remote: 15
  log_level: info
  allow_empty_backups: false
clickhouse:
  username: onescore
  password: "123456"
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
  - system.*
  - default.*
  - information_schema.*
  - INFORMATION_SCHEMA.*
  timeout: 5m
  freeze_by_part: false
  secure: false
  skip_verify: false
  sync_replicated_tables: true
  skip_sync_replica_timeouts: true
  log_sql_queries: false
sftp:
  address: "127.0.0.1"
  username: "root"
  password: "D123"
  port: 22
  key: ""
  path: "/mnt/datadomaindir/clickhouse_backup/Dev/ODS1SCHFBDMDEV"
  concurrency: 1
  compression_format: none
  debug: false
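
With that config.yml in place, a typical invocation looks like the sketch below (the backup name is arbitrary); against this Docker setup it only captured the table schemas, as noted above:

```bash
clickhouse-backup create meta_only        # create a local backup
clickhouse-backup list                    # list local and remote backups
clickhouse-backup restore meta_only       # restore (metadata only here)
```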



[odsonescoredev3 /]# df -h
Filesystem                                                                                          Size  Used Avail Use% Mounted on
devtmpfs                                                                                            7.8G     0  7.8G   0% /dev
tmpfs                                                                                               7.8G   22M  7.8G   1% /dev/shm
tmpfs                                                                                               7.8G  755M  7.1G  10% /run
tmpfs                                                                                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root                                                                             253G   82G  171G  33% /
/dev/sda1                                                                                           3.8G  233M  3.5G   7% /boot
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/d3d9d9a71c2c9553ba3646b987d7b4e30363c2288e8098ace3d8e35aabfd2cd5/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/f2bc5bec233a9f6341729d53db0473ed605a72a80ad6b8bd7ffb54b004c81803/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/8f4af2f381fc2ed01f9ec3cf7d0c5637d1016f7b0d91cd0f6a84d6e8fb01d40e/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/b78e207587d76ebb9fd3969bfb120faf257572d241948e5d5a1632c3efc04349/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/b9a729c56ab4c25055feb2d19f1dedeb6921cfa37e7194d5da68b4aaab7230a4/merged
tmpfs                                                                                               1.6G     0  1.6G   0% /run/user/0

[odsonescoredev3 /]# ll /var/lib/clickhouse
total 8
drwxr-x--- 2 101 101 116 Jul 12  2022 access
drwxr-x--- 3 101 101  22 May 13 23:19 backup
drwxr-x--- 4 101 101  35 Jul 12  2022 data
drwxr-x--- 2 101 101   6 Jul 12  2022 dictionaries_lib
drwxr-x--- 2 101 101   6 Jul 12  2022 flags
drwxr-xr-x 2 101 101   6 Jul 12  2022 format_schemas
drwxr-x--- 4 101 101 184 Jul 12  2022 metadata
drwxr-x--- 2 101 101  

### Deploying a ClickHouse Cluster in Docker

#### Prerequisites

Before deploying a ClickHouse cluster in Docker, make sure the server environment is ready and that Docker and Docker Compose are installed.

#### Pull the images

First, pull the official ClickHouse server and client images:

```bash
docker pull yandex/clickhouse-server
docker pull yandex/clickhouse-client
```

This step provides the base images needed for everything that follows[^1].

#### Modify the configuration files

Start a temporary container to obtain the default configuration files, and copy them to a chosen location on the host so they can be edited. Create the target path `/data/server/clickhouse/conf` for the configuration files and run:

```bash
mkdir -p /data/server/clickhouse/conf
docker run --rm -d --name=temp-clickhouse-server clickhouse/clickhouse-server:24.10.1.2812
docker cp temp-clickhouse-server:/etc/clickhouse-server/config.xml /data/server/clickhouse/conf/
docker cp temp-clickhouse-server:/etc/clickhouse-server/users.xml /data/server/clickhouse/conf/
```

This lets you customize the settings in `config.xml` to fit the cluster's needs[^2].

#### Write the Docker Compose file

Write a `docker-compose.yml` suitable for a multi-node ClickHouse cluster. Here is a simple two-node example:

```yaml
version: '3'
services:
  node1:
    image: yandex/clickhouse-server
    container_name: clickhouse-node1
    ports:
      - "8123:8123"
      - "9000:9000"
    volumes:
      - ./node1/data:/var/lib/clickhouse
      - ./conf:/etc/clickhouse-server
    environment:
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: secret
  node2:
    image: yandex/clickhouse-server
    container_name: clickhouse-node2
    ports:
      - "8124:8123"
      - "9001:9000"
    volumes:
      - ./node2/data:/var/lib/clickhouse
      - ./conf:/etc/clickhouse-server
    environment:
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: secret
networks:
  default:
    driver: bridge
```

This YAML file defines two independent service instances (`node1`, `node2`) that share the same configuration folder but map to separate data volumes.

#### Start the cluster

With the preparation done, start the whole cluster from the directory containing `docker-compose.yml`:

```bash
docker-compose up -d
```

This command starts all defined service instances in detached (background) mode.
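
Once the stack is up, a quick sanity check can look like this (the container name `clickhouse-node1` comes from the compose file above):

```bash
docker-compose ps                                                  # both nodes should be "Up"
docker exec clickhouse-node1 clickhouse-client --query "SELECT version()"
```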