1. What Kubernetes is for
2. Architecture
3. Master node (control plane) components
3.2 kube-apiserver
3.3 etcd
3.4 kube-scheduler
3.5 kube-controller-manager: the component that runs the controllers on the master node
4. Worker node components
4.1 kubelet
4.2 kube-proxy
4.3 Container runtime (Container Runtime)
4.4 fluentd
5. Concepts
5.1 Container: a container, for example one started by Docker
5.2 Pod: the smallest deployable unit; one or more containers sharing network and storage
5.3 Volume: storage shared by the containers inside a Pod
5.4 Controllers: higher-level objects that deploy and manage Pods
5.5 Deployment: a controller for stateless applications that manages replicated Pods
5.6 Service: a stable access entry point for a group of Pods
Exposed in several ways (ClusterIP, NodePort, LoadBalancer)
5.7 Label: labels, used to query and filter resource objects
5.8 Namespace: namespaces, providing logical isolation
Resource names may repeat across different namespaces
5.9 API: the REST API exposed by kube-apiserver, through which all resources are managed
6. Deployment and installation
6.1 Environment preparation
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Box used by the VM
      node.vm.box = "generic/centos8"
      # Hostname of the VM
      node.vm.hostname="k8s-node#{i}"
      # IP address of the VM
      node.vm.network "private_network", ip: "192.168.73.#{99+i}", netmask: "255.255.255.0"
      # Shared folder between host and VM
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"
      # VirtualBox-specific settings
      node.vm.provider "virtualbox" do |v|
        # Name of the VM
        v.name = "k8s-node#{i}"
        # Memory size of the VM
        v.memory = 4096
        # Number of CPUs of the VM
        v.cpus = 4
      end
    end
  end
end
Enter each of the three VMs and enable password login for root.
vagrant ssh <node-name> to enter the system, then:
Set the root password:
sudo passwd root
Then switch to root:
su root   # password: vagrant, as set above
vi /etc/ssh/sshd_config
Change PasswordAuthentication to yes
Restart the SSH service: service sshd restart
Finally, create a NAT network in VirtualBox and attach all three VMs to it.
6.2 Linux environment (apply on every node)
1. Turn off the firewall:
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
3. Turn off swap:
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
free -g                                 # verify: swap must show 0
4. Map hostnames to IP addresses:
vi /etc/hosts
10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3
To view the hostname, or to change it:
hostnamectl set-hostname <newhostname>   # set a new hostname
5. Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
6. Troubleshooting:
If you get a read-only file system error, remount the root filesystem read-write:
mount -o remount,rw /
7. Check the time with date and sync it (optional):
yum install -y ntpdate
ntpdate time.windows.com                 # sync the time
or
timedatectl set-timezone Asia/Shanghai
7. Install Docker, kubeadm, kubelet and kubectl on all nodes
7.1 Install Docker
1. Remove any previous Docker installation:
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Set the yum repo location for Docker:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://vqsuz7li.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
4. Start Docker and enable it at boot:
systemctl enable docker
7.2 Add the Aliyun Kubernetes yum repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
Enable kubelet at boot (it will keep restarting until kubeadm init/join has run; that is expected):
systemctl enable kubelet
systemctl start kubelet
8. Deploy the k8s master
1. Pre-pull the images the master node needs
to avoid image-pull failures during master initialization. Run ./master_images.sh:
#!/bin/bash
images=(
kube-apiserver:v1.17.3
kube-proxy:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
coredns:1.6.5
etcd:3.4.3-0
pause:3.1
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
2. Initialize the master:
kubeadm init --apiserver-advertise-address=10.0.2.15 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.17.3 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
After initialization, configure kubectl on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Save the join token and join command that kubeadm init prints at the end.
Checking the node status now shows the master as NotReady; a Pod network add-on is still required.
3. Install the flannel Pod network add-on
kubectl apply -f kube-flannel.yml
The kube-flannel.yml file is as follows:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
seLinux:
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
Check the namespaces and pods:
kubectl get ns
kubectl get pods -n kube-system
kubectl get pods --all-namespaces
The flannel pod should now be running.
Check that the master node is Ready:
kubectl get nodes
4. On every other node, run the following command to join the cluster:
kubeadm join 10.0.2.15:6443 --token 62txdy.un3bvhi3s71i3nfg \
--discovery-token-ca-cert-hash sha256:86f8f744cfad893f515bcce8e48ad86a5c3fadf3d1e11156e62b5f099edf3cdf
If the token has expired, create a new join command:
kubeadm token create --print-join-command
kubeadm token create --ttl 0 --print-join-command
Watch the progress:
watch kubectl get pod -n kube-system -o wide
9. Kubernetes basics
1. Create a Tomcat deployment
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8
kubectl get pods -o wide          # get pod information
kubectl get all -o wide           # get everything that was deployed
2. Expose it for access
--port is the Service port, --target-port is the Pod (container) port, --type is the exposure type:
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort
Visit the assigned NodePort (32067 in this run) to reach Tomcat.
3. Scale out and in dynamically
kubectl scale --replicas=3 deployment tomcat6
List all deployments:
kubectl get deployment
This shows how many tomcat6 replicas are available.
Running kubectl get all -o wide
shows 3 pods and one service.
4. Delete
Delete the deployment first:
kubectl delete deployment.apps/tomcat6
then delete the service:
kubectl delete service/tomcat6
After that the pods, service and deployment are all gone.
5. YAML
Generate a YAML file:
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat.yaml
Every Kubernetes resource can be expressed as YAML.
The simplest possible Pod:
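As a quick sketch (the pod name and the nginx image here are only illustrative, not from the original notes), a minimal Pod can be created straight from stdin:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF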
kubectl documentation:
https://kubernetes.io/zh/docs/reference/kubectl/overview/
Resource types
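One way to see which resource types (and their short names) the cluster knows about:
kubectl api-resources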
6. What Pods and Services are
Reference video (bilibili): 348、k8s-入门-Pod、Service等概念 — https://www.bilibili.com/video/BV1np4y1C7Yf?p=349&vd_source=aecee9bb7e209c3944063719f9e6a3bc
7. Create resources from YAML
An Ingress forwards requests to a particular Service according to its rules, and the Service load-balances across its Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tomcat6
name: tomcat6
spec:
replicas: 3
selector:
matchLabels:
app: tomcat6
template:
metadata:
labels:
app: tomcat6
spec:
containers:
- image: tomcat:6.0.53-jre8
name: tomcat
---
apiVersion: v1
kind: Service
metadata:
labels:
app: tomcat6
name: tomcat6
spec:
ports:
  - port: 80 # Service port
protocol: TCP
    targetPort: 8080 # container (Pod) port
selector:
app: tomcat6
type: NodePort
status:
loadBalancer: {}
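Assuming the manifest above is saved as tomcat6-deployment.yaml (the file name is just an example), it can be applied and checked with:
kubectl apply -f tomcat6-deployment.yaml
kubectl get deployment,svc -l app=tomcat6 -o wide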
Create ingress-nginx and expose its ports:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: siriuszg/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
spec:
#type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
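Assuming the controller manifest above is saved as ingress-controller.yaml (name chosen here for illustration), apply it and wait for the controller pods to come up:
kubectl apply -f ingress-controller.yaml
kubectl get pods -n ingress-nginx -o wide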
Create the Ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web
spec:
rules:
- host: tomcat6.atguigu.com
http:
paths:
- backend:
serviceName: tomcat6
          servicePort: 80 # this is the tomcat6 Service port, not the Pod port
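A rough way to test the rule, assuming it is saved as ingress-tomcat6.yaml: apply it, point the test domain at one of the node IPs from the Vagrantfile (the controller runs with hostNetwork on every node), and curl through the ingress:
kubectl apply -f ingress-tomcat6.yaml
echo '192.168.73.100 tomcat6.atguigu.com' >> /etc/hosts   # on the machine doing the test
curl http://tomcat6.atguigu.com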
10. Install KubeSphere
Reference link:
KubeSphere — a container PaaS platform for cloud-native applications with Kubernetes multi-cluster management
10.1 Preparation
Prerequisites for installing KubeSphere 3.x on Kubernetes:
1. Check the version information:
kubectl version
2. Check the minimum memory requirement:
free -g
3. Check whether a StorageClass exists for persisting data:
kubectl get sc
4. Install OpenEBS
Check whether the master node carries a taint:
kubectl describe node k8s-node1 |grep Taint
Remove the taint (the node name in this setup is k8s-node1):
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
Install OpenEBS:
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
Installing OpenEBS automatically creates StorageClasses.
Set openebs-hostpath as the default StorageClass:
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Wait until all OpenEBS pods are running before installing KubeSphere:
kubectl get pod -n openebs
5. Install Helm (Helm 2 with Tiller is used below)
Reference: CSDN blog on installing/uninstalling Helm 2 and Helm 3 and their common commands.
wget -O helm.tar.gz https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz
tar -zxvf helm.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm                                     # verify the client runs
Create helm-rbac.yaml with the content below, then run kubectl apply -f helm-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: kube-system
Pull the Tiller image locally:
docker pull jessestuart/tiller
Install Tiller:
helm init --service-account=tiller --tiller-image=jessestuart/tiller
To remove Tiller if needed:
kubectl get -n kube-system secrets,sa,clusterrolebinding -o name|grep tiller|xargs kubectl -n kube-system delete
kubectl get all -n kube-system -l app=helm -o name|xargs kubectl delete -n kube-system
10.2 Install KubeSphere 3.1.1
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.1/cluster-configuration.yaml
Check the installation logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
A success banner is printed when the installation finishes.
Check that all KubeSphere pods are running:
kubectl get pod --all-namespaces
kubectl get svc/ks-console -n kubesphere-system
Access the web console through the NodePort (IP:30880) with the default account and password (admin / P@88w0rd), then change the password (here it was changed to 123456Aa).
After logging in, create users:
a user with the workspaces-manager role can create workspaces, pull users into a given workspace and assign their roles in it;
a user with the workspaces-self-provisioner role can create projects and pull members into a project to take on roles.
10.3 Install the DevOps component
Edit cluster-configuration.yaml and set devops.enabled to true:
vi cluster-configuration.yaml
devops:
enabled: true # Change "false" to "true"
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
10.4 Deploy a project with KubeSphere
For reference: a project contains multiple applications, each application has multiple services, a service consists of multiple pods (usually for load balancing), and each pod can run multiple containers.
10.5 DevOps pipeline operations
11. Clusters, and building them with Docker
11.1 MySQL clustering options
One writer, two readers:
master1 acts as the primary; if master1 goes down, master2 takes over its work. A monitor runs on all three machines and watches the state of the three MySQL instances.
mysql-router routes requests to the appropriate database.
Different databases can also take on different responsibilities.
11.2 MySQL master/slave replication, sharding, and read/write splitting
1. Pull the image and create the master instance:
docker run -p 3307:3306 --name mysql-master-0 -v /mydata/mysql/master/log:/var/log/mysql -v /mydata/mysql/master/data:/var/lib/mysql -v /mydata/mysql/master/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
-e MYSQL_ROOT_PASSWORD=root sets the initial password of the root user.
2. Edit the master's basic configuration:
vim /mydata/mysql/master/conf/my.cnf
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
character_set_server = utf8
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
Note: skip-name-resolve must be included, otherwise connections to MySQL become extremely slow.
Add the master's replication settings:
server_id=1
log-bin=mysql-bin
read-only=0
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
Restart the master so the new configuration takes effect.
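With the container name used above, the restart is simply:
docker restart mysql-master-0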
3. Create and start the slave instance:
docker run -p 3317:3306 --name mysql-slaver-0 -v /mydata/mysql/slaver/log:/var/log/mysql -v /mydata/mysql/slaver/data:/var/lib/mysql -v /mydata/mysql/slaver/conf:/etc/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
Edit the slave's basic configuration:
vim /mydata/mysql/slaver/conf/my.cnf
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
character_set_server = utf8
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
Add the slave's replication settings:
server_id=2
log-bin=mysql-bin
read-only=1
binlog-do-db=gulimall_ums
binlog-do-db=gulimall_pms
binlog-do-db=gulimall_oms
binlog-do-db=gulimall_sms
binlog-do-db=gulimall_wms
binlog-do-db=gulimall_admin
replicate-ignore-db=mysql
replicate-ignore-db=sys
replicate-ignore-db=information_schema
replicate-ignore-db=performance_schema
Restart the slave.
4. Create a replication account on the master
1. Enter the master container:
docker exec -it mysql-master-0 /bin/bash
2. Log in to MySQL (mysql -uroot -p)
1) Allow root to log in remotely (unrelated to replication; just for convenient remote access):
grant all privileges on *.* to 'root'@'%' identified by 'root' with grant option;
flush privileges;
2) Add the account used for replication:
GRANT REPLICATION SLAVE ON *.* to 'backup'@'%' identified by '123456';
FLUSH PRIVILEGES;
With MySQL 5.7 the slave sometimes fails to connect with this account; switching to the root account works around it.
3. Check the master status:
show master status\G
5. Configure the slave to replicate from the master
1. Enter the slave container:
docker exec -it mysql-slaver-0 /bin/bash
2. Log in to MySQL (mysql -uroot -p)
1) Allow root to log in remotely (unrelated to replication; just for convenient remote access):
grant all privileges on *.* to 'root'@'%' identified by 'root' with grant option;
flush privileges;
2) Configure the connection to the master:
change master to
master_host='mysql-master0.venusmall',master_user='root',master_password='root',
master_log_file='mysql-bin.000004',master_log_pos=0,master_port=3306;
3) Start replication on the slave:
start slave;
4) Check the slave status (Slave_IO_Running and Slave_SQL_Running should both be Yes):
show slave status\G
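Optionally, an end-to-end check (a sketch; it assumes replication is healthy and the root/root credentials configured above): create one of the replicated schemas on the master and confirm it shows up on the slave.
docker exec mysql-master-0 mysql -uroot -proot -e "create database gulimall_admin;"
docker exec mysql-slaver-0 mysql -uroot -proot -e "show databases;"   # gulimall_admin should appear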
Summary:
1) Each instance declares in its own configuration which databases to replicate and which to ignore, and the server-id values must differ.
2) The master grants an account and password that the slave uses to replicate its data.
6. Sharding and read/write splitting with ShardingSphere-Proxy
The proxy can be configured for sharding (splitting databases and tables) as well as read/write splitting.
Apache ShardingSphere: https://shardingsphere.apache.org/
config-sharding.yaml
defines two groups of data sources; one column decides which database a row belongs to, and another column decides which table:
databaseName: sharding_db
#data sources for the two primary databases
dataSources:
ds_0:
url: jdbc:mysql://192.168.73.10:3307/demo_ds_0?serverTimezone=UTC&useSSL=false
username: root
password: root
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
ds_1:
url: jdbc:mysql://192.168.73.10:3307/demo_ds_1?serverTimezone=UTC&useSSL=false
username: root
password: root
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
rules:
- !SHARDING
tables:
t_order:
actualDataNodes: ds_${0..1}.t_order_${0..1}
tableStrategy:
standard:
shardingColumn: order_id
shardingAlgorithmName: t_order_inline
keyGenerateStrategy:
column: order_id
keyGeneratorName: snowflake
auditStrategy:
auditorNames:
- sharding_key_required_auditor
allowHintDisable: true
t_order_item:
actualDataNodes: ds_${0..1}.t_order_item_${0..1}
tableStrategy:
standard:
shardingColumn: order_id
shardingAlgorithmName: t_order_item_inline
keyGenerateStrategy:
column: order_item_id
keyGeneratorName: snowflake
bindingTables:
- t_order,t_order_item
defaultDatabaseStrategy:
standard:
shardingColumn: user_id
shardingAlgorithmName: database_inline
defaultTableStrategy:
none:
defaultAuditStrategy:
auditorNames:
- sharding_key_required_auditor
allowHintDisable: true
  #the inline expressions that implement the sharding rules
shardingAlgorithms:
database_inline:
type: INLINE
props:
algorithm-expression: ds_${user_id % 2}
t_order_inline:
type: INLINE
props:
algorithm-expression: t_order_${order_id % 2}
t_order_item_inline:
type: INLINE
props:
algorithm-expression: t_order_item_${order_id % 2}
keyGenerators:
snowflake:
type: SNOWFLAKE
auditors:
sharding_key_required_auditor:
type: DML_SHARDING_CONDITIONS
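A usage sketch, assuming ShardingSphere-Proxy has been started with this config-sharding.yaml and that a root/root account is defined in server.yaml (host, port 3307 and credentials all depend on the actual proxy setup; <proxy-host> is a placeholder): connect to the logical database and create the logical tables, and the proxy spreads t_order_0/t_order_1 and t_order_item_0/t_order_item_1 across ds_0 and ds_1.
mysql -h <proxy-host> -P 3307 -u root -p sharding_db
# then run CREATE TABLE t_order (...) in the session; the proxy routes the DDL to every shard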
Read/write splitting:
Reads: the sharding rules determine the target database and table; that database belongs to a primary/replica group, and within the group a replica is picked by round robin or another load-balancing algorithm.
Writes: the sharding rules determine the target database and table; within the owning primary/replica group the write always goes to the primary.
databaseName: readwrite_splitting_db
dataSources:
write_ds_0:
url: jdbc:mysql://192.168.73.10:3307/demo_ds_0?serverTimezone=UTC&useSSL=false
username: root
password: root
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
read_ds_0:
url: jdbc:mysql://192.168.73.10:3317/demo_ds_0?serverTimezone=UTC&useSSL=false
username: root
password: root
connectionTimeoutMilliseconds: 30000
idleTimeoutMilliseconds: 60000
maxLifetimeMilliseconds: 1800000
maxPoolSize: 50
minPoolSize: 1
# read_ds_1:
# url: jdbc:mysql://127.0.0.1:3306/demo_read_ds_1?serverTimezone=UTC&useSSL=false
# username: root
# password:
# connectionTimeoutMilliseconds: 30000
# idleTimeoutMilliseconds: 60000
# maxLifetimeMilliseconds: 1800000
# maxPoolSize: 50
# minPoolSize: 1
rules:
- !READWRITE_SPLITTING
dataSources:
readwrite_ds:
staticStrategy:
writeDataSourceName: write_ds_0
readDataSourceNames:
- read_ds_0
# - read_ds_1
loadBalancerName: random
loadBalancers:
random:
type: RANDOM
11.3 Build a Redis cluster
1. Hash-slot clusters
Drawback of this approach:
stability: because data is assigned to slots, if a master and its replica both go down the slots are reassigned, which can leave data inconsistent or unreachable.
2. Consistent hashing
When a node fails, only the keys mapped to that node are affected.
Its drawback is hash skew (uneven distribution of keys), which is usually mitigated with virtual nodes.
3. Deploy a slot-based Redis Cluster
1. 3 masters and 3 replicas: the replicas exist for synchronized backup, and the masters shard the data across slots.
for port in $(seq 7001 7006); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port ${port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.56.10
cluster-announce-port ${port}
cluster-announce-bus-port 1${port}
appendonly yes
EOF
docker run -p ${port}:${port} -p 1${port}:1${port} --name redis-${port} \
  -v /mydata/redis/node-${port}/data:/data \
  -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
  -d redis:5.0.7 redis-server /etc/redis/redis.conf; \
done
2. Create the cluster
docker exec -it redis-7001 bash
redis-cli --cluster create 192.168.56.10:7001 192.168.56.10:7002 192.168.56.10:7003 \
  192.168.56.10:7004 192.168.56.10:7005 192.168.56.10:7006 --cluster-replicas 1
--cluster-replicas 1 means each master is paired with one replica.
3. Test
Enter any of the Redis containers:
docker exec -it redis-7002 /bin/bash
Connect with redis-cli in cluster mode:
redis-cli -c -h 192.168.56.10 -p 7006
cluster info    # show cluster information
cluster nodes   # list the cluster nodes
get/set commands are redirected to the node that owns the key's slot.
If a master node goes down, its replica is automatically promoted to master; when the old master comes back it rejoins as a replica.
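A simple failover check, as a sketch: stop one master container, watch the cluster view from another node, then bring it back.
docker stop redis-7001
docker exec -it redis-7002 redis-cli -c -p 7002 cluster nodes   # its replica should now be master
docker start redis-7001                                         # it rejoins as a replica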
11.4 Build an Elasticsearch cluster
Create 3 master nodes
The master-eligible nodes only coordinate the cluster and do not store data.
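The scripts below attach the containers to a user-defined bridge network called mynet with fixed IPs 172.18.12.21-26. If it does not exist yet, it can be created along these lines (the subnet only needs to contain those addresses):
docker network create --driver bridge --subnet=172.18.12.0/24 --gateway=172.18.12.1 mynet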
for port in $(seq 1 3); \
do \
mkdir -p /mydata/elasticsearch/master-${port}/config
mkdir -p /mydata/elasticsearch/master-${port}/data
chmod -R 777 /mydata/elasticsearch/master-${port}
cat << EOF >/mydata/elasticsearch/master-${port}/config/elasticsearch.yml
cluster.name: my-es  # cluster name; must be identical on every node of the cluster
node.name: es-master-${port}  # this node's name
node.master: true   # this node is eligible to become master
node.data: false    # this node does not store data
network.host: 0.0.0.0
http.host: 0.0.0.0  # accept http requests from anywhere
http.port: 920${port}
transport.tcp.port: 930${port}
#discovery.zen.minimum_master_nodes: 2  # ensures each node knows about the other N master-eligible nodes; the official recommendation is (N/2)+1
discovery.zen.ping_timeout: 10s  # timeout when pinging other nodes during discovery
discovery.seed_hosts: ["172.18.12.21:9301", "172.18.12.22:9302", "172.18.12.23:9303"]  # initial list of master nodes used to discover other nodes joining the cluster (new in es7)
cluster.initial_master_nodes: ["172.18.12.21"]  # candidate master nodes when the cluster first forms (new in es7)
EOF
docker run --name elasticsearch-node-${port} \
  -p 920${port}:920${port} -p 930${port}:930${port} \
  --network=mynet --ip 172.18.12.2${port} \
  -e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
  -v /mydata/elasticsearch/master-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/master-${port}/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/master-${port}/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2
done
Create 3 data nodes
for port in $(seq 4 6); \
do \
mkdir -p /mydata/elasticsearch/node-${port}/config
mkdir -p /mydata/elasticsearch/node-${port}/data
chmod -R 777 /mydata/elasticsearch/node-${port}
cat << EOF >/mydata/elasticsearch/node-${port}/config/elasticsearch.yml
cluster.name: my-es  # cluster name; must be identical on every node of the cluster
node.name: es-node-${port}  # this node's name
node.master: false  # this node is not eligible to become master
node.data: true     # this node stores data
network.host: 0.0.0.0
#network.publish_host: 192.168.56.10  # IP used for inter-node communication; set it to an IP reachable from outside, otherwise nodes cannot talk to each other
http.host: 0.0.0.0  # accept http requests from anywhere
http.port: 920${port}
transport.tcp.port: 930${port}
#discovery.zen.minimum_master_nodes: 2  # ensures each node knows about the other N master-eligible nodes; the official recommendation is (N/2)+1
discovery.zen.ping_timeout: 10s  # timeout when pinging other nodes during discovery
discovery.seed_hosts: ["172.18.12.21:9301", "172.18.12.22:9302", "172.18.12.23:9303"]  # initial list of master nodes used to discover other nodes joining the cluster (new in es7)
cluster.initial_master_nodes: ["172.18.12.21"]  # candidate master nodes when the cluster first forms (new in es7)
EOF
docker run --name elasticsearch-node-${port} \
  -p 920${port}:920${port} -p 930${port}:930${port} \
  --network=mynet --ip 172.18.12.2${port} \
  -e ES_JAVA_OPTS="-Xms300m -Xmx300m" \
  -v /mydata/elasticsearch/node-${port}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /mydata/elasticsearch/node-${port}/data:/usr/share/elasticsearch/data \
  -v /mydata/elasticsearch/node-${port}/plugins:/usr/share/elasticsearch/plugins \
  -d elasticsearch:7.4.2
done
11.5 Build a RabbitMQ cluster
One node acts as the broker that producers and consumers talk to; the other nodes join it and mirror its data as standbys.
1. Create the containers
mkdir /mydata/rabbitmq
cd /mydata/rabbitmq
mkdir rabbitmq01 rabbitmq02 rabbitmq03
docker run -d --hostname rabbitmq01 --name rabbitmq01 \
  -v /mydata/rabbitmq/rabbitmq01:/var/lib/rabbitmq -p 15673:15672 -p 5673:5672 \
  -e RABBITMQ_ERLANG_COOKIE='atguigu' rabbitmq:management

docker run -d --hostname rabbitmq02 --name rabbitmq02 \
  -v /mydata/rabbitmq/rabbitmq02:/var/lib/rabbitmq -p 15674:15672 -p 5674:5672 \
  -e RABBITMQ_ERLANG_COOKIE='atguigu' --link rabbitmq01:rabbitmq01 \
  rabbitmq:management

docker run -d --hostname rabbitmq03 --name rabbitmq03 \
  -v /mydata/rabbitmq/rabbitmq03:/var/lib/rabbitmq -p 15675:15672 -p 5675:5672 \
  -e RABBITMQ_ERLANG_COOKIE='atguigu' --link rabbitmq01:rabbitmq01 \
  --link rabbitmq02:rabbitmq02 rabbitmq:management
2. Join the nodes into a cluster
docker exec -it rabbitmq01 /bin/bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
exit
Enter the second node:
docker exec -it rabbitmq02 /bin/bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq01
rabbitmqctl start_app
exit
Enter the third node:
docker exec -it rabbitmq03 bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram rabbit@rabbitmq01
rabbitmqctl start_app
exit
3. Set up a mirrored (HA) cluster
docker exec -it rabbitmq01 bash
Set the policy:
rabbitmqctl set_policy -p / ha "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
List all policies under the / vhost:
rabbitmqctl list_policies -p /
The policy can be enabled on any node in the cluster; it is synchronized to all cluster nodes automatically, e.g.:
rabbitmqctl set_policy -p / ha-all "^" '{"ha-mode":"all"}'
ha-mode "all" mirrors queues to every node, including nodes added later; the pattern "^" matches all queue names, while "^hello" would match only queues whose names start with hello.
4. Test
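A minimal sanity check, as a sketch: from any node, confirm that the three nodes see each other and that the mirroring policy is in place. The management UIs are also exposed on host ports 15673-15675.
docker exec -it rabbitmq02 rabbitmqctl cluster_status
docker exec -it rabbitmq02 rabbitmqctl list_policies -p /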