Database Essentials: TiDB (11) Installing a TiDB Cluster

Installing a TiDB Cluster

To install a TiDB cluster, you first set up a control machine, then install and manage the cluster from it.

Installing the Cluster on a Single Machine

A single-machine cluster installs all of the nodes on one server.

The cluster needs 3 PD instances and 3 TiKV instances (three is the minimum for these Raft-based components to keep a majority); one instance each is enough for the remaining components.

During installation, the control machine is set up first, and the cluster is then installed and managed through it.

Download and Install TiUP

Download and install TiUP with the following command:

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

The installation output is:

wux_labs@wux-labs-vm:~$ curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 7088k  100 7088k    0     0  1483k      0  0:00:04  0:00:04 --:--:-- 1561k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /home/wux_labs/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /home/wux_labs/.bashrc
/home/wux_labs/.bashrc has been modified to add tiup to PATH
open a new terminal or source /home/wux_labs/.bashrc to use it
Installed path: /home/wux_labs/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
wux_labs@wux-labs-vm:~$


After installation, the prompt tells you to run source /home/wux_labs/.bashrc to make the environment variable take effect. This is because install.sh appended the following line to that file:

export PATH=/home/wux_labs/.tiup/bin:$PATH

Run the command as prompted to apply the change; the tiup command is now on the PATH environment variable.
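
A quick way to confirm the setup is to print the TiUP version (the exact output varies by release):

source /home/wux_labs/.bashrc
tiup --version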

Install the TiUP cluster Component

Run the following command to install the cluster component:

tiup cluster

The installation output is:

wux_labs@wux-labs-vm:~$ tiup cluster
tiup is checking updates for component cluster ...timeout(2s)!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.11.3-linux-amd64.tar.gz 8.44 MiB / 8.44 MiB 100.00% 7.00 MiB/s                                   
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster
Deploy a TiDB cluster for production

Usage:
  tiup cluster [command]

Available Commands:
  check       Perform preflight checks for the cluster.
  deploy      Deploy a cluster for production
  start       Start a TiDB cluster
  stop        Stop a TiDB cluster
  restart     Restart a TiDB cluster
  scale-in    Scale in a TiDB cluster
  scale-out   Scale out a TiDB cluster
  destroy     Destroy a specified cluster
  clean       (EXPERIMENTAL) Cleanup a specified cluster
  upgrade     Upgrade a specified TiDB cluster
  display     Display information of a TiDB cluster
  prune       Destroy and remove instances that is in tombstone state
  list        List all clusters
  audit       Show audit log of cluster operation
  import      Import an exist TiDB cluster from TiDB-Ansible
  edit-config Edit TiDB cluster config
  show-config Show TiDB cluster config
  reload      Reload a TiDB cluster's config and restart if needed
  patch       Replace the remote package with a specified package and restart the service
  rename      Rename the cluster
  enable      Enable a TiDB cluster automatically at boot
  disable     Disable automatic enabling of TiDB clusters at boot
  replay      Replay previous operation and skip successed steps
  template    Print topology template
  tls         Enable/Disable TLS between TiDB components
  meta        backup/restore meta information
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -c, --concurrency int     max number of parallel tasks allowed (default 5)
      --format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
  -h, --help                help for tiup
      --ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
      --ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
  -v, --version             version for tiup
      --wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
  -y, --yes                 Skip all confirmations and assumes 'yes'

Use "tiup cluster help [command]" for more information about a command.
wux_labs@wux-labs-vm:~$

With that, the TiUP cluster component is installed.
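
To confirm which version of the component was installed, the -v/--version flag listed in the help output above can be used:

tiup cluster --version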

Create the Topology File

To avoid configuration mistakes and speed things up, you can generate a topology template with the following command and then edit it:

tiup cluster template > topology.yaml
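
If you want a more complete, commented starting point, the template subcommand can, as far as I know, also emit a full example topology (check tiup cluster template --help to confirm the flag):

tiup cluster template --full > topology.yaml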


Since multiple instances are deployed on a single server, the PD and TiKV instances must be distinguished by different ports. The final, edited topology file is:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]

pd_servers:
  - host: wux-labs-vm
    client_port: 23791
    peer_port: 23801
    deploy_dir: "/tidb-deploy/pd-23791"
    data_dir: "/tidb-data/pd-23791"
    log_dir: "/tidb-deploy/pd-23791/log"
    config:
      server.labels: { host: "logic-host-1" }
  - host: wux-labs-vm
    client_port: 23792
    peer_port: 23802
    deploy_dir: "/tidb-deploy/pd-23792"
    data_dir: "/tidb-data/pd-23792"
    log_dir: "/tidb-deploy/pd-23792/log"
  - host: wux-labs-vm
    client_port: 23793
    peer_port: 23803
    deploy_dir: "/tidb-deploy/pd-23793"
    data_dir: "/tidb-data/pd-23793"
    log_dir: "/tidb-deploy/pd-23793/log"

tidb_servers:
  - host: wux-labs-vm

tikv_servers:
  - host: wux-labs-vm
    port: 20161
    status_port: 20181
    deploy_dir: "/tidb-deploy/tikv-20161"
    data_dir: "/tidb-data/tikv-20161"
    log_dir: "/tidb-deploy/tikv-20161/log"
    config:
      server.labels: { host: "logic-host-1" }
  - host: wux-labs-vm
    port: 20162
    status_port: 20182
    deploy_dir: "/tidb-deploy/tikv-20162"
    data_dir: "/tidb-data/tikv-20162"
    log_dir: "/tidb-deploy/tikv-20162/log"
    config:
      server.labels: { host: "logic-host-2" }
  - host: wux-labs-vm
    port: 20163
    status_port: 20183
    deploy_dir: "/tidb-deploy/tikv-20163"
    data_dir: "/tidb-data/tikv-20163"
    log_dir: "/tidb-deploy/tikv-20163/log"
    config:
      server.labels: { host: "logic-host-3" }
    
tiflash_servers:
  - host: wux-labs-vm
  
monitoring_servers:
  - host: wux-labs-vm
  
grafana_servers:
  - host: wux-labs-vm
  
alertmanager_servers:
  - host: wux-labs-vm

Configure Passwordless SSH Login

Because the cluster is installed and managed through the control machine, passwordless SSH login needs to be configured even though this is a single-machine cluster.

wux_labs@wux-labs-vm:~$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/wux_labs/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/wux_labs/.ssh/id_rsa
Your public key has been saved in /home/wux_labs/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:sKT4o0ISwCqLtL0cVm4yFTFQS19FJbp3FsTyXfA/Gmg wux_labs@wux-labs-vm
The key's randomart image is:
+---[RSA 3072]----+
|.  .o=.  .o+oo.. |
|..  ..+ . ..o. ..|
|o    .+. .  o.. o|
|+. . = o  . .....|
|+o+ = . S. E + ..|
|+o B o    o o o .|
|o o O        .   |
|.  + .           |
| ..              |
+----[SHA256]-----+
wux_labs@wux-labs-vm:~$ 
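
Generating the key pair by itself does not enable passwordless login; the public key must also be appended to the local authorized_keys file. The original omits this step, so the commands below are the standard OpenSSH way to do it, not taken from the source:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh wux-labs-vm   # should now log in without prompting for a password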


Check the Installation Requirements

Before installing the cluster, check that the server meets the requirements so the installation will succeed. Run the following command:

tiup cluster check ./topology.yaml

The check output is:

wux_labs@wux-labs-vm:~$ tiup cluster check ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster check ./topology.yaml

+ Detect CPU Arch Name
  - Detecting node wux-labs-vm Arch info ... Done

+ Detect CPU OS Name
  - Detecting node wux-labs-vm OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
  - Getting system info of wux-labs-vm:22 ... Done
+ Check time zone
  - Checking node wux-labs-vm ... Done
+ Check system requirements
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
  - Checking node wux-labs-vm ... Done
+ Cleanup check files
  - Cleanup check files on wux-labs-vm:22 ... Done
Node         Check         Result  Message
----         -----         ------  -------
wux-labs-vm  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
wux-labs-vm  sysctl        Fail    vm.swappiness = 60, should be 0
wux-labs-vm  sysctl        Fail    net.core.somaxconn = 4096, should be greater than 32768
wux-labs-vm  thp           Fail    THP is enabled, please disable it for best performance
wux-labs-vm  command       Fail    numactl not usable, bash: numactl: command not found
wux-labs-vm  os-version    Warn    OS is Ubuntu 20.04.5 LTS 20.04.5 (ubuntu support is not fully tested, be careful)
wux-labs-vm  cpu-cores     Pass    number of CPU cores / threads: 2
wux-labs-vm  memory        Pass    memory size is 8192MB
wux-labs-vm  selinux       Pass    SELinux is disabled
wux-labs-vm  service       Pass    service firewalld not found, ignore
wux-labs-vm  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
wux-labs-vm  network       Pass    network speed of enP58751s1 is 50000MB
wux-labs-vm  network       Pass    network speed of eth0 is 50000MB
wux-labs-vm  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
wux-labs-vm  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
wux-labs-vm  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
wux_labs@wux-labs-vm:~$ 


As the results show, several checks failed (Result: Fail).

You can fix the failing items by hand, or have the check fix them automatically with the following command:

tiup cluster check ./topology.yaml --apply

This command repeats the checks and then adds a repair step at the end that fixes the failing items.
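
If you would rather fix the kernel-parameter failures by hand instead of using --apply, the standard Linux commands below match the values reported in the check output (these are an addition, not from the original article; sysctl -w changes do not survive a reboot unless also written to /etc/sysctl.conf):

sudo sysctl -w net.ipv4.tcp_syncookies=0
sudo sysctl -w vm.swappiness=0
sudo sysctl -w net.core.somaxconn=32768
# disable transparent huge pages (THP)
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled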


After the repair finishes, run the check again to make sure every item meets the requirements.


One item is still missing: the numactl package must be installed manually. On Ubuntu 20.04, install it with:

sudo apt-get install numactl


Once the package is installed, run the check one more time; every item now passes.


Create the Installation Directories

Because we specified the cluster deployment directories as

  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

and the current user is not root, the installation directories have to be created manually first:

sudo mkdir /tidb-deploy /tidb-data
sudo chmod 777 /tidb-data /tidb-deploy


Deploy the Cluster

Once all checks pass, deploy the TiDB cluster with the deploy command, where cluster1 is the name of the new cluster:

tiup cluster deploy cluster1 v6.1.0 ./topology.yaml

When asked for confirmation, enter y to continue the installation.
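
If you are unsure which TiDB versions are available to deploy, TiUP can list them (a standard TiUP command; v6.1.0 above is simply the version this walkthrough uses):

tiup list tidb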


The full deployment output is:

wux_labs@wux-labs-vm:~$ tiup cluster deploy cluster1 v6.1.0 ./topology.yaml
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster deploy cluster1 v6.1.0 ./topology.yaml

+ Detect CPU Arch Name
  - Detecting node wux-labs-vm Arch info ... Done

+ Detect CPU OS Name
  - Detecting node wux-labs-vm OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    cluster1
Cluster version: v6.1.0
Role          Host         Ports                            OS/Arch       Directories
----          ----         -----                            -------       -----------
pd            wux-labs-vm  23791/23801                      linux/x86_64  /tidb-deploy/pd-23791,/tidb-data/pd-23791
pd            wux-labs-vm  23792/23802                      linux/x86_64  /tidb-deploy/pd-23792,/tidb-data/pd-23792
pd            wux-labs-vm  23793/23803                      linux/x86_64  /tidb-deploy/pd-23793,/tidb-data/pd-23793
tikv          wux-labs-vm  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv          wux-labs-vm  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tikv          wux-labs-vm  20163/20183                      linux/x86_64  /tidb-deploy/tikv-20163,/tidb-data/tikv-20163
tidb          wux-labs-vm  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash       wux-labs-vm  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus    wux-labs-vm  9090/12020                       linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       wux-labs-vm  3000                             linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  wux-labs-vm  9093/9094                        linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v6.1.0 (linux/amd64) ... Done
  - Download tikv:v6.1.0 (linux/amd64) ... Done
  - Download tidb:v6.1.0 (linux/amd64) ... Done
  - Download tiflash:v6.1.0 (linux/amd64) ... Done
  - Download prometheus:v6.1.0 (linux/amd64) ... Done
  - Download grafana:v6.1.0 (linux/amd64) ... Done
  - Download alertmanager: (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare wux-labs-vm:22 ... Done
+ Deploy TiDB instance
  - Copy pd -> wux-labs-vm ... Done
  - Copy pd -> wux-labs-vm ... Done
  - Copy pd -> wux-labs-vm ... Done
  - Copy tikv -> wux-labs-vm ... Done
  - Copy tikv -> wux-labs-vm ... Done
  - Copy tikv -> wux-labs-vm ... Done
  - Copy tidb -> wux-labs-vm ... Done
  - Copy tiflash -> wux-labs-vm ... Done
  - Copy prometheus -> wux-labs-vm ... Done
  - Copy grafana -> wux-labs-vm ... Done
  - Copy alertmanager -> wux-labs-vm ... Done
  - Deploy node_exporter -> wux-labs-vm ... Done
  - Deploy blackbox_exporter -> wux-labs-vm ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> wux-labs-vm:23791 ... Done
  - Generate config pd -> wux-labs-vm:23792 ... Done
  - Generate config pd -> wux-labs-vm:23793 ... Done
  - Generate config tikv -> wux-labs-vm:20161 ... Done
  - Generate config tikv -> wux-labs-vm:20162 ... Done
  - Generate config tikv -> wux-labs-vm:20163 ... Done
  - Generate config tidb -> wux-labs-vm:4000 ... Done
  - Generate config tiflash -> wux-labs-vm:9000 ... Done
  - Generate config prometheus -> wux-labs-vm:9090 ... Done
  - Generate config grafana -> wux-labs-vm:3000 ... Done
  - Generate config alertmanager -> wux-labs-vm:9093 ... Done
+ Init monitor configs
  - Generate config node_exporter -> wux-labs-vm ... Done
  - Generate config blackbox_exporter -> wux-labs-vm ... Done
Enabling component pd
        Enabling instance wux-labs-vm:23793
        Enabling instance wux-labs-vm:23792
        Enabling instance wux-labs-vm:23791
        Enable instance wux-labs-vm:23791 success
        Enable instance wux-labs-vm:23792 success
        Enable instance wux-labs-vm:23793 success
Enabling component tikv
        Enabling instance wux-labs-vm:20163
        Enabling instance wux-labs-vm:20161
        Enabling instance wux-labs-vm:20162
        Enable instance wux-labs-vm:20163 success
        Enable instance wux-labs-vm:20162 success
        Enable instance wux-labs-vm:20161 success
Enabling component tidb
        Enabling instance wux-labs-vm:4000
        Enable instance wux-labs-vm:4000 success
Enabling component tiflash
        Enabling instance wux-labs-vm:9000
        Enable instance wux-labs-vm:9000 success
Enabling component prometheus
        Enabling instance wux-labs-vm:9090
        Enable instance wux-labs-vm:9090 success
Enabling component grafana
        Enabling instance wux-labs-vm:3000
        Enable instance wux-labs-vm:3000 success
Enabling component alertmanager
        Enabling instance wux-labs-vm:9093
        Enable instance wux-labs-vm:9093 success
Enabling component node_exporter
        Enabling instance wux-labs-vm
        Enable wux-labs-vm success
Enabling component blackbox_exporter
        Enabling instance wux-labs-vm
        Enable wux-labs-vm success
Cluster `cluster1` deployed successfully, you can start it with command: `tiup cluster start cluster1 --init`
wux_labs@wux-labs-vm:~$

The cluster is now installed; the next step is to start it.

Start the Cluster

After deployment, you can inspect the installed clusters with the following commands.

  • List all clusters
tiup cluster list


  • Check the cluster status
tiup cluster display cluster1


The cluster currently has 11 instances, none of which are running yet.

Start the cluster as prompted; the --init flag performs a secure start that generates a password for the database root user.

tiup cluster start cluster1 --init

The startup output is:

wux_labs@wux-labs-vm:~$ tiup cluster start cluster1 --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/wux_labs/.tiup/components/cluster/v1.11.3/tiup-cluster start cluster1 --init
Starting cluster cluster1...
+ [ Serial ] - SSHKeySet: privateKey=/home/wux_labs/.tiup/storage/cluster/clusters/cluster1/ssh/id_rsa, publicKey=/home/wux_labs/.tiup/storage/cluster/clusters/cluster1/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [Parallel] - UserSSH: user=tidb, host=wux-labs-vm
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance wux-labs-vm:23793
        Starting instance wux-labs-vm:23791
        Starting instance wux-labs-vm:23792
        Start instance wux-labs-vm:23792 success
        Start instance wux-labs-vm:23791 success
        Start instance wux-labs-vm:23793 success
Starting component tikv
        Starting instance wux-labs-vm:20163
        Starting instance wux-labs-vm:20161
        Starting instance wux-labs-vm:20162
        Start instance wux-labs-vm:20162 success
        Start instance wux-labs-vm:20163 success
        Start instance wux-labs-vm:20161 success
Starting component tidb
        Starting instance wux-labs-vm:4000
        Start instance wux-labs-vm:4000 success
Starting component tiflash
        Starting instance wux-labs-vm:9000
        Start instance wux-labs-vm:9000 success
Starting component prometheus
        Starting instance wux-labs-vm:9090
        Start instance wux-labs-vm:9090 success
Starting component grafana
        Starting instance wux-labs-vm:3000
        Start instance wux-labs-vm:3000 success
Starting component alertmanager
        Starting instance wux-labs-vm:9093
        Start instance wux-labs-vm:9093 success
Starting component node_exporter
        Starting instance wux-labs-vm
        Start wux-labs-vm success
Starting component blackbox_exporter
        Starting instance wux-labs-vm
        Start wux-labs-vm success
+ [ Serial ] - UpdateTopology: cluster=cluster1
Started cluster `cluster1` successfully
The root password of TiDB database has been changed.
The new password is: '@2XKr^+9&nNZ3U07q6'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
wux_labs@wux-labs-vm:~$ 


From this log, you can see the order in which TiDB components start: PD → TiKV → TiDB → TiFlash → Prometheus → Grafana → Alertmanager → node_exporter → blackbox_exporter.

Verify the Cluster Startup

Verify with Commands

After startup, check the cluster status again:

tiup cluster display cluster1

This time, all of the TiDB instances are up.
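
You can also verify from the SQL side by connecting with any MySQL-compatible client. Port 4000 is the TiDB port shown in the topology above, and the password is the one generated by --init; the mysql client itself is assumed to be installed separately, as it is not part of the TiUP deployment:

mysql -h wux-labs-vm -P 4000 -u root -p
# enter the generated root password, then run a quick sanity check: SELECT VERSION();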

View via the Dashboard

From the output above, the TiDB Dashboard address is http://wux-labs-vm:23792/dashboard; open it directly in a browser.


Log in with the username and password (the database root user and the password generated at startup) to open the Dashboard.


From here you can monitor the TiDB cluster.

View via Grafana

From the output above, the Grafana address is http://wux-labs-vm:3000; open it directly in a browser.


Enter the username and password to reach the monitoring dashboards (for a TiUP deployment the default Grafana credentials are typically admin / admin).


View via Prometheus

Besides the options above, you can also inspect the Prometheus monitoring data directly at http://wux-labs-vm:9090/ in a browser.
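
Prometheus also exposes its standard HTTP query API, so a quick scriptable health check is possible without a browser; the built-in up metric reports 1 for every target Prometheus can scrape (this example is an addition, not from the original):

curl 'http://wux-labs-vm:9090/api/v1/query?query=up'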


Closing Notes

TiDB is a distributed database built on the TiKV distributed key-value store, so a cluster involves quite a few nodes: if every instance ran on its own server, you would need at least a dozen or so machines. The single-machine environment here only simulates a distributed cluster and must not be used in production.
