Goal
Deploy all of the Spring Cloud (sc) components and microservices onto the k8s environment.
Deployment Environment
K8s environment
Host Name | Role | IP |
---|---|---|
master1 | k8s-master01/etcd | 192.168.200.87 |
master2 | k8s-master02/etcd | 192.168.200.206 |
master3 | k8s-master03/etcd | 192.168.200.209 |
node1 | k8s-node01 | 192.168.200.11 |
node2 | k8s-node02 | 192.168.200.12 |
node3 | k8s-node03 | 192.168.200.205 |
Spring Cloud Deployment Architecture
(Spring Cloud deployment architecture diagram)
- Apollo is deployed on physical machines.
- The MySQL cluster is deployed as a headless service.
- Eureka runs with hostNetwork=true, pinning each instance to a fixed host IP and port (a sketch follows this list).
- The business system's front end is deployed on Nginx; when calling the gateway back end, it must be configured with the back end's service name + port.
- The gateway management front end needs the gateway back end's URL, so the back end is exposed via Ingress, and the back-end domain-to-host mapping is added to /etc/hosts on the gateway front end; the gateway front end is likewise exposed via Ingress. Finally, the domain-to-IP mappings for both the gateway front end and back end are configured on the hosts that users access from.
- In a development environment you can either configure the domain mappings in /etc/hosts, or expose services as NodePort, at the cost of maintaining the port assignments.
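For reference, here is a minimal sketch of the hostNetwork approach for Eureka; the image name, replica count, and port 8761 are assumptions, not taken from this project's manifests:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: eureka
  replicas: 3
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      hostNetwork: true                    # share the node's network namespace, so the pod uses the host IP
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working despite hostNetwork
      containers:
      - name: eureka
        image: your-registry/eureka-server:latest   # placeholder image
        ports:
        - containerPort: 8761              # with hostNetwork, this binds port 8761 on the node itself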
Configure Dynamic PVC Provisioning
- Step 1: Get connection information for your NFS server. Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname.
- Step 2: Get the NFS-Client Provisioner files. To set up the provisioner you will download a set of YAML files, edit them to add your NFS server's connection information, and then apply each with the kubectl / oc command. Get all of the files in the deploy directory of this repository. These instructions assume that you have cloned the external-storage repository and have a bash shell open in the nfs-client directory.
- Step 3: Set up authorization. If your cluster has RBAC enabled or you are running OpenShift, you must authorize the provisioner. If you are in a namespace/project other than "default", edit deploy/rbac.yaml.
Kubernetes:
# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
$ kubectl create -f deploy/rbac.yaml
- Step 4: Configure the NFS-Client provisioner.
Note: To deploy to an ARM-based environment, use deploy/deployment-arm.yaml; otherwise use deploy/deployment.yaml.
Next you must edit the provisioner's deployment file to add connection information for your NFS server. Edit deploy/deployment.yaml and replace the two occurrences of <YOUR NFS SERVER HOSTNAME> with your server's hostname.
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: quay.io/external_storage/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/nfs
        - name: NFS_SERVER
          value: <YOUR NFS SERVER HOSTNAME>
        - name: NFS_PATH
          value: /var/nfs
      volumes:
      - name: nfs-client-root
        nfs:
          server: <YOUR NFS SERVER HOSTNAME>
          path: /var/nfs
You can change PROVISIONER_NAME to something that describes the NFS storage, e.g. nfs-storage. Here the NFS server is 192.168.200.13 and the path is /nfs_data. PROVISIONER_NAME must stay consistent with the Storage Class definition in deploy/class.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/nfs # or choose another name; must match the deployment's env PROVISIONER_NAME.
                            # Take care not to reuse the name of the static NFS storage class;
                            # check existing classes with kubectl get storageclass.
parameters:
  archiveOnDelete: "false" # When set to "false" your PVs will not be archived
                           # by the provisioner upon deletion of the PVC.
[root@k8s-master01 nfs-client]# kubectl apply -f ./deploy/deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/nfs-client-provisioner configured
deployment.extensions/nfs-client-provisioner created
[root@k8s-master01 nfs-client]# kubectl apply -f ./deploy/class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
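To confirm that dynamic provisioning works before moving on, you can create a throwaway claim against the new class (the name test-claim is illustrative; the nfs-client repository ships a similar test manifest):

# Smoke-test PVC: if the provisioner is healthy, `kubectl get pvc test-claim`
# should show the claim Bound within a few seconds.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi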
Deployment Steps
MySQL High-Availability Cluster
Reference: https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
Pull the xtrabackup image and re-tag it with the gcr.io name that the manifests reference:
docker pull ist0ne/xtrabackup
docker tag ist0ne/xtrabackup:latest gcr.io/google-samples/xtrabackup:1.0
[root@k8s-master01 mysql]# vim mysql-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    [mysqld]
    log-bin
    log_bin_trust_function_creators=1
    lower_case_table_names=1
  slave.cnf: |
    [mysqld]
    super-read-only
    log_bin_trust_function_creators=1
[root@k8s-master01 mysql]# kubectl apply -f mysql-configmap.yaml
configmap/mysql created
Create the MySQL services:
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
The headless service is required because, in the StatefulSet below, the mysql-1/mysql-2 pods must reach mysql-0 through its stable per-pod DNS name (mysql-0.mysql) in order to clone and replicate its data.
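Apply the two services (the file name mysql-services.yaml is an assumption), and, once the StatefulSet below is running, the stable per-pod DNS names can be checked from a throwaway pod (busybox:1.28 and the default namespace are assumptions):

kubectl apply -f mysql-services.yaml
# Resolve the master's per-pod DNS entry provided by the headless service:
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- \
  nslookup mysql-0.mysql.default.svc.cluster.local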
[root@k8s-master01 mysql]# vim mysql-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate mysql server-id from pod ordinal index.
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid reserved server-id=0 value.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy appropriate conf.d files from config-map to emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/master.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/slave.cnf /mnt/conf.d/
          fi
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: gcr.io/google-samples/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on master (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=100;
          START SLAVE;
          EOF
          fi
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
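After kubectl apply -f mysql-statefulset.yaml, once both pods are Running, a quick smoke test in the spirit of the referenced tutorial is to write through the master's stable name and read back through mysql-read (the demo database and table are illustrative):

# Write via the master (only mysql-0 accepts writes)...
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS demo; \
    CREATE TABLE IF NOT EXISTS demo.messages (message VARCHAR(250)); \
    INSERT INTO demo.messages VALUES ('hello');"
# ...and read back via the load-balanced read-only service.
kubectl run mysql-client-read --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM demo.messages"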
Exposing MySQL outside the cluster via ingress-nginx (TCP)
Reference:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md
# Deploy the nginx-ingress-controller's service account, cluster role, cluster role binding, Deployment, and ConfigMap
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# Expose the desired ports
$ cat nginx-ingress-service.yml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306
# Deploy using the manifest above
$ kubectl apply -f nginx-ingress-service.yml
# After a short wait (and a restart of Docker for macOS in this example environment), a process should be listening on port 3306
$ lsof -i :3306
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 36484 jeremy 37u IPv4 0xe746861636421a57 0t0 TCP *:mysql (LISTEN)
com.docke 36484 jeremy 39u IPv6 0xe7468616205d110f 0t0 TCP localhost:mysql (LISTEN)
# Next, create the ConfigMap for TCP services; here "mysql" is the name of the MySQL service. To reverse-proxy other TCP services, adjust the entries under data accordingly
$ cat nginx-tcp-configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tcp-configmap
  namespace: kube-system
data:
  "3306": default/mysql:3306
# Finally, edit the nginx-ingress-controller's runtime arguments to point it at the TCP-services ConfigMap, adding the startup flag --tcp-services-configmap=kube-system/nginx-tcp-configmap
kubectl edit deployment nginx-ingress-controller
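The edit boils down to one extra flag under the controller container's args; a sketch of the relevant fragment (the surrounding args are abbreviated and may differ by controller version):

      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=kube-system/nginx-tcp-configmap   # the added flag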
Now the MySQL service can be reached from the local machine:
mysql -uroot -p -h127.0.0.1 -P3306
At this point, services speaking either HTTP or TCP can be conveniently exposed for external use.
One caveat: because master.cnf enables log-bin without an argument, mysqld logs a warning like the following at startup:
2019-06-13T02:02:13.801814Z 0 [Warning] No argument was provided to --log-bin, and --log-bin-index was not used; so replication may break when this MySQL server acts as a master and has his hostname changed!! Please use '--log-bin=mysql-0-bin' to avoid this problem.
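A hedged way to avoid this warning is to give log-bin an explicit base name in master.cnf (mysql-bin here is an arbitrary choice), so the binlog file name no longer depends on the pod's hostname:

master.cnf: |
  [mysqld]
  log-bin=mysql-bin
  log_bin_trust_function_creators=1
  lower_case_table_names=1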