Deploying a MySQL Master-Slave Cluster on K8s
1. Create the namespace.yaml file
apiVersion: v1
kind: Namespace
metadata:
  name: deploy-test
spec: {}
status: {}
2. Create the namespace
kubectl create -f namespace.yaml
Check that it was created successfully:
kubectl get ns
3. Create the Secret for the MySQL password
(1) Run the following command
kubectl create secret generic mysql-password --namespace=deploy-test --from-literal=mysql_root_password=root --dry-run=client -o=yaml
Notes:
- Creates a Secret
- Named mysql-password
- In the deploy-test namespace
- The root after --from-literal=mysql_root_password= is the password
- --dry-run=client only validates and prints the manifest; nothing is created on the server
(2) Save the generated manifest as mysql_root_password_secret.yaml
apiVersion: v1
data:
  mysql_root_password: cm9vdA==
kind: Secret
metadata:
  creationTimestamp: null
  name: mysql-password
  namespace: deploy-test
(3) Create the Secret
kubectl create -f mysql_root_password_secret.yaml
(4) View the Secret
kubectl get secret -n deploy-test
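Note that the value under the Secret's data field is base64-encoded, not encrypted. A quick sanity check, runnable on any machine with coreutils, round-trips the password:

```shell
# base64-encode the literal password "root" (printf avoids a trailing newline)
printf '%s' 'root' | base64
# decode the value stored in the Secret manifest back to plain text
printf '%s' 'cm9vdA==' | base64 -d
```

This confirms that cm9vdA== in the manifest is simply root, which is why access to Secrets should still be restricted with RBAC.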
4. Install the MySQL master node
(1) Create the PVC
Storage was set up earlier; here the rook-ceph-block StorageClass provisions the PersistentVolume dynamically, so only a PVC manifest is needed:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-master-ceph-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
Note: the RWX access mode is not recommended with Ceph RBD. If the application layer has no locking mechanism, concurrent writers can corrupt data, so use RWO access, i.e. ReadWriteOnce.
(2) View the PVC
kubectl get pvc -n deploy-test
(3) Master node configuration file my.cnf
[mysqld]
skip-host-cache
skip-name-resolve
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file = /var/run/mysqld/mysqld.pid
user = mysql
server-id = 1
log-bin = master-bin
log_bin_index = master-bin.index
binlog_do_db = deploy_test
binlog_ignore_db = information_schema
binlog_ignore_db = mysql
binlog_ignore_db = performance_schema
binlog_ignore_db = sys
binlog-format = row
[client]
socket = /var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
(4) Next, create a ConfigMap to store this configuration file. The following command generates the YAML manifest
kubectl create configmap mysql-master-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml
The generated ConfigMap manifest:
apiVersion: v1
data:
  my.cnf: |-
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user = mysql
    server-id = 1
    log-bin = master-bin
    log-bin-index = master-bin.index
    binlog_do_db = deploy_test
    binlog_ignore_db = information_schema
    binlog_ignore_db = mysql
    binlog_ignore_db = performance_schema
    binlog_ignore_db = sys
    binlog-format = row
    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-master-cm
  namespace: deploy-test
5. Deploy the MySQL master node
(1) The full YAML manifest for the MySQL master node:
apiVersion: v1
data:
  my.cnf: |-
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user = mysql
    server-id = 1
    log-bin = master-bin
    log-bin-index = master-bin.index
    binlog_do_db = deploy_test
    binlog_ignore_db = information_schema
    binlog_ignore_db = mysql
    binlog_ignore_db = performance_schema
    binlog_ignore_db = sys
    binlog-format = row
    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-master-cm
  namespace: deploy-test
---
apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-master-svc
  namespace: deploy-test
  labels:
    app: mysql-master
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
    nodePort: 30306
  selector:
    app: mysql-master
  type: NodePort
  sessionAffinity: ClientIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-master
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-master
  serviceName: "deploy-mysql-master-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-master
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        - --lower_case_table_names=1
        - --default-time_zone=+8:00
        name: mysql
        # image: docker.io/library/mysql:8.0.34
        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/my.cnf
          readOnly: true
          subPath: my.cnf
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql_root_password
              name: mysql-password
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: deploy-mysql-master-ceph-pvc
      - name: mysql-conf
        configMap:
          name: mysql-master-cm
          items:
          - key: my.cnf
            mode: 0644
            path: my.cnf
(2) Create the master node
kubectl create -f mysql-master.yaml
(3) Check the resources
kubectl get all -o wide -n deploy-test
(4) Open a MySQL shell inside the container
kubectl exec -itn deploy-test pod/deploy-mysql-master-0 -- mysql -uroot -proot
(5) View the master node status
show master status;
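The walkthrough below replicates using the root account. As an optional hardening step (not part of the original setup), a dedicated replication account can be created on the master; the repl name and password here are placeholders:

```sql
-- Placeholders: choose your own account name and password
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
```

The slaves would then use master_user='repl' and that password in the change master to statement instead of root.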
6. Install the first slave node
As with the master, the PV is provisioned dynamically, so only the PVC manifest is needed:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: deploy-mysql-slave-01-ceph-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: rook-ceph-block
  volumeMode: Filesystem
(1) Configuration file my.cnf for the first slave node
[mysqld]
skip-host-cache
skip-name-resolve
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file = /var/run/mysqld/mysqld.pid
user = mysql
server-id = 2
log-bin = slave-bin
relay-log = slave-relay-bin
relay-log-index = slave-relay-bin.index
[client]
socket = /var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
(2) Next, create a ConfigMap to store this configuration file. The following command generates the YAML manifest
kubectl create configmap mysql-slave-01-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml
The generated ConfigMap manifest:
apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user = mysql
    server-id = 2
    log-bin = slave-bin
    relay-log = slave-relay-bin
    relay-log-index = slave-relay-bin.index
    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-slave-01-cm
  namespace: deploy-test
(3) YAML manifest for the first slave node
apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file = /var/run/mysqld/mysqld.pid
    user = mysql
    server-id = 2
    log-bin = slave-bin
    relay-log = slave-relay-bin
    relay-log-index = slave-relay-bin.index
    [client]
    socket = /var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mysql-slave-01-cm
  namespace: deploy-test
---
apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-slave-svc
  namespace: deploy-test
  labels:
    app: mysql-slave
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
    nodePort: 30308
  selector:
    app: mysql-slave
  type: NodePort
  sessionAffinity: ClientIP
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-slave-01
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-slave
  serviceName: "deploy-mysql-slave-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-slave
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        - --lower_case_table_names=1
        - --default-time_zone=+8:00
        name: mysql
        # image: docker.io/library/mysql:8.0.34
        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/my.cnf
          readOnly: true
          subPath: my.cnf
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql_root_password
              name: mysql-password
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: deploy-mysql-slave-01-ceph-pvc
      - name: mysql-conf
        configMap:
          name: mysql-slave-01-cm
          items:
          - key: my.cnf
            mode: 0644
            path: my.cnf
Note: this Service is shared with the second slave node.
(4) Check the resources
kubectl get all -n deploy-test
7. Create the second slave node
This is the same as the first slave node, except:
- The Service does not need to be created again
- Change every slave-01 to slave-02
- Change server-id to 3
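The two changes above can be applied mechanically. A sketch, assuming the first slave's manifest was saved as mysql-slave-01.yaml (the file names are assumptions):

```shell
# Replace every slave-01 with slave-02 and bump the server-id from 2 to 3
sed -e 's/slave-01/slave-02/g' \
    -e 's/server-id = 2/server-id = 3/' \
    mysql-slave-01.yaml > mysql-slave-02.yaml
```

Remember to delete the Service block from the copy before applying it, since the Service is shared with the first slave.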
Then check the status of the one-master, two-slave cluster.
8. Test
(1) View the master node status
The database to be replicated is deploy_test
(2) Enter the first MySQL slave node with the following command
kubectl exec -itn deploy-test pod/deploy-mysql-slave-01-0 -- mysql -uroot -proot
(3) Run the following command on each of the two slave nodes
change master to master_host='deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local',
master_port=3306, master_user='root', master_password='root', master_log_file='master-bin.000003',
master_log_pos=157,master_connect_retry=30,get_master_public_key=1;
Note the following parameters:
- master_host: the master's address. Kubernetes resolves names of the form pod-name.service-name.namespace.svc.cluster.local, so the master's MySQL address is deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local
- master_port: the master's MySQL port, 3306 by default
- master_user: the MySQL user to log in to the master with
- master_password: that user's password
- master_log_file: the File field from the earlier show master status output on the master
- master_log_pos: the Position field from that same output
- master_connect_retry: the reconnect interval, in seconds
- get_master_public_key: how to obtain the master's public key for the connection
Adjust these values to match your own environment.
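The master_host naming rule above can be sanity-checked by assembling the FQDN from its parts:

```shell
# Pattern: <pod-name>.<service-name>.<namespace>.svc.cluster.local
pod='deploy-mysql-master-0'
svc='deploy-mysql-master-svc'
ns='deploy-test'
echo "${pod}.${svc}.${ns}.svc.cluster.local"
```

The output is exactly the address used in the change master to statement.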
(4) Verify what master_host resolves to
Install bind-utils:
yum install -y bind-utils
List the pods in the deploy-test namespace:
kubectl get pod -n deploy-test -o wide
Get the cluster IP of the DNS Service in the kube-system namespace:
kubectl get svc -n kube-system
Resolve the name against that DNS server:
nslookup deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local 10.233.0.3
(5) Start replication on the slave
start slave;
(6) Check the slave status and confirm that Slave_IO_Running and Slave_SQL_Running are both Yes
show slave status\G
9. Test the master-slave cluster
(1) Create a database on the master node
create database deploy_test;
(2) Create the user table on the master node
CREATE TABLE user
(userId int,
userName varchar(255));
(3) Insert a row on the master node
insert into user values(1, "John");
(4) Check whether the data has been replicated to the slave nodes
The database replicated successfully
The table and its data replicated successfully
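To verify, the following statements, run inside each slave's mysql shell, should show the replicated database, table, and row:

```sql
show databases;       -- deploy_test should be listed
use deploy_test;
select * from user;   -- should return the row (1, 'John')
```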