k8s

1: Installing the k8s cluster

1.1 The k8s architecture

(figure: k8s architecture diagram)

Besides the core components, there are some recommended add-ons:

Component               Purpose
kube-dns                provides DNS service for the whole cluster
Ingress Controller      provides an external entry point for services
Heapster                provides resource monitoring
Dashboard               provides a GUI
Federation              provides clusters spanning availability zones
Fluentd-elasticsearch   provides cluster log collection, storage and querying
1.2: Configure IP addresses, hostnames and hosts resolution

10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
All nodes need these hosts entries.
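
A sketch (run the hostnamectl line on each node with its own name):

hostnamectl set-hostname k8s-master   # k8s-node-1 / k8s-node-2 on the other machines
cat >> /etc/hosts <<'EOF'
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
EOF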

1.3: Install etcd on the master node
yum install etcd -y

vim /etc/etcd/etcd.conf
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"

systemctl start etcd.service
systemctl enable etcd.service

etcdctl set testdir/testkey0 0
etcdctl get testdir/testkey0

etcdctl -C http://10.0.0.11:2379 cluster-health

etcd natively supports clustering; an HA etcd cluster is set up in section 8.1.

1.4: Install kubernetes on the master node
yum install kubernetes-master.x86_64 -y

vim /etc/kubernetes/apiserver
line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
line 11: KUBE_API_PORT="--port=8080"
line 14: KUBELET_PORT="--kubelet-port=10250"
line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service

Check that the services came up correctly:

[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 
1.5: Install kubernetes on the nodes
yum install kubernetes-node.x86_64 -y

vim /etc/kubernetes/config
line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

vim /etc/kubernetes/kubelet
line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
line 8:  KUBELET_PORT="--port=10250"
line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"   # use 10.0.0.13 on k8s-node-2
line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"

systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service

Check on the master node:

[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     6m
10.0.0.13   Ready     3s
1.6: Configure the flannel network on all nodes
yum install flannel -y
sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld

## master node:
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'

yum install docker -y
systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl enable docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

## node nodes:
systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service

## all nodes: docker 1.13+ sets the iptables FORWARD policy to DROP, which breaks cross-node pod traffic
vim /usr/lib/systemd/system/docker.service
# add one line under the [Service] section
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload
systemctl restart docker
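
A quick cross-node check (a sketch; the flannel subnets and container IPs vary per cluster, and any locally available image will do):

cat /run/flannel/subnet.env    # e.g. FLANNEL_SUBNET=172.18.42.1/24
ip addr show docker0           # docker0 should sit inside this flannel subnet
# run a container on each node, note its IP, then ping one from the other:
docker run -it --rm alpine sh -c 'ip a; ping -c 2 <container IP on the other node>'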
1.7: Configure the master as an image registry
# all nodes

vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries": ["10.0.0.11:5000"]
}

systemctl restart docker

# master node
docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry registry
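
To verify the registry round-trip (a sketch, assuming nginx:1.13 can be pulled through the mirror):

docker pull nginx:1.13
docker tag nginx:1.13 10.0.0.11:5000/nginx:1.13
docker push 10.0.0.11:5000/nginx:1.13
curl http://10.0.0.11:5000/v2/_catalog    # should list "nginx"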

2: What is k8s, and what can it do?

k8s is a management tool for docker clusters.

k8s is a container orchestration tool.

2.1 Core features of k8s

Self-healing: restarts failed containers; replaces and reschedules containers when their node becomes unavailable; kills containers that fail user-defined health checks; and does not advertise containers to clients until they are ready to serve.

Autoscaling: by monitoring container CPU load; if the average rises above 80% the number of containers is increased, and if it falls below 10% it is decreased.

Service discovery and load balancing: no need to modify your application to use an unfamiliar service discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and load-balances across them.

Rolling upgrades and one-command rollback: Kubernetes rolls out changes to an application or its configuration gradually while monitoring application health, ensuring it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, leveraging a growing ecosystem of deployment solutions.

Secret configuration management: e.g. the database account and password (test DB credentials) used inside a web container; see the sketch below.
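
A minimal sketch (the secret name and password here are invented) of keeping a DB password out of the image:

kubectl create secret generic mysql-pass --from-literal=password=SomePass123
kubectl get secret mysql-pass -o yaml    # the value is stored base64-encoded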

2.2 History of k8s

2014: the docker container orchestration tool project is started.

July 2015: kubernetes 1.0 is released and joins the CNCF for incubation.

2016: kubernetes sees off its two rivals, docker swarm and mesos marathon; version 1.2.

2017: versions 1.5 - 1.9.

2018: k8s graduates from the CNCF; versions 1.10, 1.11, 1.12.

2019: versions 1.13, 1.14, 1.15, 1.16, 1.17.

CNCF: the Cloud Native Computing Foundation, an incubator.

kubernetes (k8s): Greek for helmsman or pilot; the leader in container orchestration.

Google has 15 years of container experience with its borg management platform; kubernetes is a rewrite of borg in golang.

2.3 Ways to install k8s

yum install: version 1.5; the easiest to get working, best suited for learning.

Compile from source: the hardest; can install the latest version.

Binary install: tedious steps; can install the latest version; can be automated with shell, ansible or saltstack.

kubeadm: the easiest install; needs network access; can install the latest version.

minikube: good for developers who want to try k8s out; needs network access.

2.4 Use cases for k8s

k8s is best suited to running microservice projects!

3: Commonly used k8s resources

3.1 Creating a pod resource

The pod is the smallest resource unit.

Any k8s resource can be defined by a yaml manifest file.

The main parts of a k8s yaml:

apiVersion: v1   # api version
kind: Pod        # resource type
metadata:        # attributes
spec:            # details

k8s_pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
vi k8s_pod.yaml
cat k8s_pod.yaml
kubectl create -f k8s_pod.yaml
kubectl get pod 
kubectl describe pod nginx
  51m       51m     1   {default-scheduler }            Normal      Scheduled   Successfully assigned nginx to 10.0.0.13
  51m       45m     6   {kubelet 10.0.0.13}         Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"

  51m   43m 33  {kubelet 10.0.0.13}     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""

On each node:

vim /etc/kubernetes/kubelet
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
systemctl restart kubelet.service
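
The pod-infrastructure image itself must exist in the private registry; one way to put it there (a sketch, assuming the host can reach the upstream registry — otherwise docker load it from a tarball):

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 10.0.0.11:5000/pod-infrastructure:latest
docker push 10.0.0.11:5000/pod-infrastructure:latest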

A pod resource consists of at least two containers: the pod infrastructure container plus the business container(s) (at most 1+4).

Pod config file 2:

apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
    - name: alpine
      image: 10.0.0.11:5000/alpine:latest
      command: ["sleep","1000"]

The pod is the smallest resource unit in k8s.

3.2 The ReplicationController resource

rc: keeps the specified number of pods alive at all times; the rc finds its pods through a label selector.

Common operations on k8s resources:
kubectl create -f xxx.yaml
kubectl get pod|rc
kubectl describe pod nginx
kubectl delete pod nginx   # or: kubectl delete -f xxx.yaml
kubectl edit pod nginx

Create an rc:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 5  # 5 replicas
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
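
To confirm the rc found its pods through the label selector (a quick check):

kubectl get pods -l app=myweb -o wide    # should list 5 running pods
kubectl get rc nginx                     # DESIRED and CURRENT should both be 5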

Rolling upgrade with an rc:
Write a new nginx-rc1.15.yaml (a copy of the rc above with a different rc name, at least one changed label/selector, and the image tag bumped to 1.15).

Upgrade:
kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=10s

Roll back:
kubectl rolling-update nginx2 -f nginx-rc.yaml --update-period=1s

3.3 The service resource

A service helps pods expose their ports.

Create a service:

apiVersion: v1
kind: Service     # abbreviated svc
metadata:
  name: myweb
spec:
  type: NodePort  # default is ClusterIP
  ports:
    - port: 80          # port on the ClusterIP
      nodePort: 30000   # port on each node
      targetPort: 80    # port in the pod
  selector:
    app: myweb          # must match the pod label set by the rc above

kubectl scale rc nginx --replicas=2      # adjust the replica count

kubectl exec -it pod_name /bin/bash      # get a shell inside a pod

Change the allowed nodePort range:

vim  /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"

Create a service resource from the command line:

kubectl expose rc nginx --type=NodePort --port=80

By default a service load-balances with iptables; newer k8s releases (1.8+) recommend lvs instead (layer-4 load balancing over transport-layer tcp/udp).
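
You can inspect the rules kube-proxy writes for a service on any node (a sketch; the generated chain names contain random hashes and will differ):

iptables -t nat -S | grep myweb    # KUBE-SERVICES / KUBE-NODEPORTS entries for the myweb service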

3.4 The deployment resource

An rc rolling upgrade interrupts service access (the new rc's pods carry a new label, so the service's selector no longer matches them), so k8s introduced the deployment resource.

Create a deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  minReadySeconds: 60
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:  
          limits:
            cpu: 100m
          requests:
            cpu: 100m

Deployment upgrade and rollback

Create a deployment from the command line:

kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record

Upgrade the image from the command line:

kubectl set image deployment nginx nginx=10.0.0.11:5000/nginx:1.15

View all historical revisions of the deployment:

kubectl rollout history deployment nginx

Roll the deployment back to the previous revision:

kubectl rollout undo deployment nginx

Roll the deployment back to a specific revision:

kubectl rollout undo deployment nginx --to-revision=2

4: k8s add-on components

The DNS service in a k8s cluster resolves svc names to their VIP (ClusterIP) addresses.

4.1 The DNS service

Install the DNS service:

1: Download the dns_docker image bundle (node2, 10.0.0.13)

wget http://192.168.37.200/191127/docker_k8s_dns.tar.gz

2: Import the dns_docker image bundle (node2, 10.0.0.13)

3: Create the DNS service

vi skydns-rc.yaml
...
  spec:
    nodeName: 10.0.0.13
    containers:

kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml

4: Check:

kubectl get all --namespace=kube-system

5: Modify the kubelet config on all nodes

vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"

systemctl restart kubelet
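
To verify resolution from inside a pod (a sketch; it assumes the "test" pod with the alpine container from section 3.1 is still running — busybox's nslookup works for this):

kubectl exec -it test -c alpine -- nslookup myweb
# the svc name should resolve to its ClusterIP via 10.254.230.254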
4.2 Namespaces

Namespaces provide resource isolation; a minimal sketch follows.
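
A quick look (the qiangge namespace also shows up in section 4.5):

kubectl create namespace qiangge
kubectl get pods -n qiangge        # each namespace sees only its own resources
kubectl get pods -n kube-system    # e.g. the dns add-on lives in kube-system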

4.3 Health and readiness checks
4.3.1 Types of probes
  • livenessProbe: health check; periodically checks whether the service is alive; if the check fails, the container is restarted
  • readinessProbe: readiness check; periodically checks whether the service is usable; if not, the pod is removed from the service's endpoints
4.3.2 Probe methods
  • exec: run a command; exit code 0 is healthy, non-zero is not
  • httpGet: check the status code of an http request; 2xx/3xx is healthy, 4xx/5xx is an error
  • tcpSocket: test whether a port accepts connections
4.3.3 Using a liveness probe with exec
vi  nginx_pod_exec.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: exec
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5   
        periodSeconds: 5
        timeoutSeconds: 5
        successThreshold: 1
        failureThreshold: 1
4.3.4 Using a liveness probe with httpGet
vi   nginx_pod_httpGet.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: httpget
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      livenessProbe:
        httpGet:
          path: /index.html
          port: 80
        initialDelaySeconds: 3
        periodSeconds: 3
4.3.5 Using a liveness probe with tcpSocket
vi   nginx_pod_tcpSocket.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcpsocket   # pod names must be lowercase (DNS-1123)
spec:
  containers:
    - name: nginx
      image: 10.0.0.11:5000/nginx:1.13
      ports:
        - containerPort: 80
      args:
        - /bin/sh
        - -c
        - tail -f /etc/hosts
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 3
4.3.6 Using a readiness probe with httpGet
vi   nginx-rc-httpGet.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /qiangge.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
4.4 The dashboard service

1: Upload and import the images, then tag them

2: Create the dashboard deployment and service

3: Visit http://10.0.0.11:8080/ui/

4.5 Accessing a service through the apiserver reverse proxy
Option 1: NodePort type
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

Option 2: ClusterIP type
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80

http://10.0.0.11:8080/api/v1/proxy/namespaces/<namespace>/services/<service name>/
# example:
http://10.0.0.11:8080/api/v1/proxy/namespaces/qiangge/services/wordpress

5: k8s autoscaling

k8s autoscaling needs the heapster monitoring add-on.

5.1 Install heapster monitoring

1: Upload and import the images, then tag them

ls *.tar.gz
for n in `ls *.tar.gz`;do docker load -i $n ;done
docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary

2: Upload the config files and run kubectl create -f .

Edit the config files first:
#heapster-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:
      - name: heapster
        image: 10.0.0.11:5000/heapster:canary
        imagePullPolicy: IfNotPresent
#influxdb-grafana-controller.yaml
    spec:
      nodeName: 10.0.0.13
      containers:

3: Open the dashboard to verify that the resource graphs appear

5.2 Autoscaling

1: Modify the rc config file (add cpu requests/limits):

  containers:
  - name: myweb
    image: 10.0.0.11:5000/nginx:1.13
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m

2: Create the autoscaling rule:

kubectl autoscale deploy nginx-deployment --max=8 --min=1 --cpu-percent=10

3: Test:

ab -n 1000000 -c 40 http://10.0.0.12:33218/index.html
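
Watch the autoscaler react while ab runs (a sketch):

kubectl get hpa          # CURRENT cpu should climb past the 10% TARGET
kubectl get pods -w      # replicas grow toward --max, then shrink again after the load stops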

Scale-up: (screenshot omitted)

Scale-down: (screenshot omitted)

6: Persistent storage

Types of data persistence:

6.1 emptyDir:
spec:
  nodeName: 10.0.0.13
  volumes:
  - name: mysql
    emptyDir: {}
  containers:
    - name: wp-mysql
      image: 10.0.0.11:5000/mysql:5.7
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 3306
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: mysql
6.2 HostPath:
spec:
  nodeName: 10.0.0.12
  volumes:
  - name: mysql
    hostPath:
      path: /data/wp_mysql
  containers:
    - name: wp-mysql
      image: 10.0.0.11:5000/mysql:5.7
      imagePullPolicy: IfNotPresent
      ports:
      - containerPort: 3306
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: mysql
6.3 nfs:
  volumes:
  - name: mysql
    nfs:
      path: /data/wp_mysql
      server: 10.0.0.11
6.4 pv and pvc:

pv: persistent volume, a global resource in the k8s cluster

pvc: persistent volume claim, a local resource belonging to a particular namespace

6.4.1: Install the nfs server (10.0.0.11)
yum install nfs-utils.x86_64 -y
mkdir /data
vim /etc/exports
/data  10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
6.4.2: Install the nfs client on the nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
6.4.3: Create the pv and pvc

Upload the yaml config files and create the pv and pvc; a sketch follows.
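
For example, an nfs-backed pv/pvc pair (a sketch: the 10Gi size is an assumption; the names match the claimName used in 6.4.4 below):

mkdir /data/wp_mysql    # on the nfs server

vi mysql_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tomcat-mysql
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/wp_mysql
    server: 10.0.0.11

vi mysql_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-mysql
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

kubectl create -f mysql_pv.yaml -f mysql_pvc.yaml
kubectl get pv,pvc    # the pvc should show STATUS Bound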

6.4.4: Create mysql-rc and use the volume in the pod template
  volumes:
  - name: mysql
    persistentVolumeClaim:
      claimName: tomcat-mysql
6.4.5: Verify persistence

Verification 1: delete the mysql pod; the database data survives.

kubectl delete pod mysql-gt054

Verification 2: check that the mysql data files appear on the nfs server.

6.5: Distributed storage with glusterfs

a: What is glusterfs

Glusterfs is an open-source distributed file system with strong horizontal scale-out capability. It can support several PB of storage and thousands of clients, interconnected over the network into one parallel network file system, and features scalability, high performance and high availability.

b: Install glusterfs

All nodes:
yum install centos-release-gluster6.noarch -y
yum install glusterfs-server -y
systemctl start glusterd.service
systemctl enable glusterd.service
# create directories to use as the cluster's storage units (bricks)
mkdir -p /gfs/test1
mkdir -p /gfs/test2
mkdir -p /gfs/test3

c: Add nodes to the storage pool

master node:
gluster pool list
gluster peer probe k8s-node-1
gluster peer probe k8s-node-2
gluster pool list

d: glusterfs volume management

# create a distributed-replicated volume
gluster volume create qiangge replica 2 k8s-master:/gfs/test1 k8s-node-1:/gfs/test1 k8s-master:/gfs/test2 k8s-node-1:/gfs/test2 force
# start the volume
gluster volume start qiangge
# inspect the volume
gluster volume info qiangge
# mount the volume
mount -t glusterfs 10.0.0.11:/qiangge /mnt
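
Check the volume after mounting (a sketch):

gluster volume status qiangge    # every brick process should be Online
df -h /mnt                       # usable size is the total brick size divided by the replica count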

e: Distributed-replicated volumes explained

(figure: distributed-replicated volume layout)

f: Expand a distributed-replicated volume

# capacity before expansion:
df -h

# expand:
gluster volume add-brick qiangge k8s-node-2:/gfs/test1 k8s-node-2:/gfs/test2 force

# capacity after expansion:
df -h
6.6 Using glusterfs storage from k8s

a: Create the endpoints

vi glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs
  namespace: tomcat
subsets:
- addresses:
  - ip: 10.0.0.11
  - ip: 10.0.0.12
  - ip: 10.0.0.13
  ports:
  - port: 49152
    protocol: TCP

b: Create the service

vi glusterfs-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs
  namespace: tomcat
spec:
  ports:
  - port: 49152
    protocol: TCP
    targetPort: 49152
  type: ClusterIP

c: Create a gluster-type pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster
  labels:
    type: glusterfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs"
    path: "qiangge"
    readOnly: false

d: Create the pvc

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gluster
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi   # must not exceed the pv's 50Gi capacity, or the claim will never bind

e: Use gluster in a pod

vi nginx_pod.yaml
……
        volumeMounts:
        - name: nfs-vol2
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nfs-vol2
        persistentVolumeClaim:
          claimName: gluster

7: Continuous deployment with jenkins

IP address    Services                        Memory
10.0.0.11     kube-apiserver :8080            1G
10.0.0.12     kube-apiserver :8080            1G
10.0.0.13     jenkins (tomcat + jdk) :8080    3G

The code repository is hosted with gitee (the walkthrough below self-hosts gitlab instead).

7.1: Install gitlab and push the code
# a: install
wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
# b: configure
vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.0.13'
prometheus_monitoring['enable'] = false
# c: apply the config and start the service
gitlab-ctl reconfigure

# browse to http://10.0.0.13, set the root password, create a project

# push the code to the git repository
cd /srv/
rz -E
unzip xiaoniaofeifei.zip
rm -fr xiaoniaofeifei.zip

git config --global user.name "Administrator"
git config --global user.email "admin@example.com"
git init
git remote add origin http://10.0.0.13/root/xiaoniao.git
git add .
git commit -m "Initial commit"
git push -u origin master
7.2 Install jenkins and build docker images automatically
1: Install jenkins
cd /opt/
wget   http://192.168.12.201/191216/apache-tomcat-8.0.27.tar.gz 
wget   http://192.168.12.201/191216/jdk-8u102-linux-x64.rpm     
wget   http://192.168.12.201/191216/jenkin-data.tar.gz       
wget   http://192.168.12.201/191216/jenkins.war                       
rpm -ivh jdk-8u102-linux-x64.rpm 
mkdir /app -p
tar xf apache-tomcat-8.0.27.tar.gz -C /app
rm -fr /app/apache-tomcat-8.0.27/webapps/*
mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
tar xf jenkin-data.tar.gz -C /root
/app/apache-tomcat-8.0.27/bin/startup.sh 
netstat -lntup
2: Access jenkins

Visit http://10.0.0.12:8080/; the default credentials are admin:123456.

3: Configure a jenkins credential for pulling code from gitlab

a: Generate a key pair on the jenkins host

ssh-keygen -t rsa

b: Paste the public key into gitlab

c: Create a global credential in jenkins (using the private key)

4: Test pulling the code

5: Write the dockerfile and test it
# vim dockerfile
FROM 10.0.0.11:5000/nginx:1.13
ADD . /usr/share/nginx/html

List the files that docker build should not ADD into the image:

vim .dockerignore
dockerfile

docker build -t xiaoniao:v1 .
docker run -d -p 88:80 xiaoniao:v1

Open a browser and check that the xiaoniaofeifei project is served.

6: Push the dockerfile and .dockerignore to the repository

git add dockerfile .dockerignore
git commit -m "first commit"
git push -u origin master

7: Click "Build Now" in jenkins to build the docker image and push it to the private registry automatically

Modify the jenkins job configuration to add a shell build step:

docker build -t 10.0.0.11:5000/test:v$BUILD_ID .
docker push 10.0.0.11:5000/test:v$BUILD_ID

7.3 Deploying to k8s automatically from jenkins

kubectl -s 10.0.0.11:8080 get nodes    # first check that the jenkins host can reach the apiserver

The build-step script below creates the namespace, deployment and service on the first run, and on later runs only updates the image; /tmp/xiaoniao.lock marks that the first deploy has already happened:

if [ -f /tmp/xiaoniao.lock ];then
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl -s 10.0.0.11:8080 set image  -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
    port=`kubectl -s 10.0.0.11:8080  get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "你的项目地址访问是http://10.0.0.13:$port"
    echo "更新成功"
else
    docker  build  -t  10.0.0.11:5000/xiaoniao:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
    kubectl  -s 10.0.0.11:8080  create  namespace  xiaoniao
    kubectl  -s 10.0.0.11:8080  run   xiaoniao  -n xiaoniao  --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
    kubectl  -s 10.0.0.11:8080   expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
    port=`kubectl -s 10.0.0.11:8080  get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
    echo "你的项目地址访问是http://10.0.0.13:$port"
    echo "发布成功"
    touch /tmp/xiaoniao.lock
    chattr +i /tmp/xiaoniao.lock
fi

One-command rollback from jenkins:

kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao

8: k8s high availability

8.1: Install and configure an HA etcd cluster
# install etcd on all nodes
yum install etcd -y

vim /etc/etcd/etcd.conf   # values below are for node1 (10.0.0.11); adjust the name and IPs on node2/node3
line 3:  ETCD_DATA_DIR="/var/lib/etcd/"
line 5:  ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
line 9:  ETCD_NAME="node1"   # this node's name
line 20: ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"   # peer URL for data replication
line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"         # client URL served to clients
line 26: ETCD_INITIAL_CLUSTER="node1=http://10.0.0.11:2380,node2=http://10.0.0.12:2380,node3=http://10.0.0.13:2380"
line 27: ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
line 28: ETCD_INITIAL_CLUSTER_STATE="new"

systemctl enable etcd
systemctl restart etcd

[root@k8s-master tomcat_demo]# etcdctl cluster-health
member 9e80988e833ccb43 is healthy: got healthy result from http://10.0.0.11:2379
member a10d8f7920cc71c7 is healthy: got healthy result from http://10.0.0.13:2379
member abdc532bc0516b2d is healthy: got healthy result from http://10.0.0.12:2379
cluster is healthy

# point flannel at all three etcd endpoints
vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'
systemctl restart flanneld
systemctl restart docker
8.2 Configure api-server, controller-manager and scheduler on master01 (pointing at 127.0.0.1:8080)
vim /etc/kubernetes/apiserver
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379"

vim /etc/kubernetes/config
KUBE_MASTER="--master=http://127.0.0.1:8080"

systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service kube-scheduler.service
8.3 Install and configure api-server, controller-manager and scheduler on master02 (127.0.0.1:8080)
yum install kubernetes-master.x86_64 -y
scp -rp 10.0.0.11:/etc/kubernetes/apiserver /etc/kubernetes/apiserver
scp -rp 10.0.0.11:/etc/kubernetes/config /etc/kubernetes/config
systemctl stop kubelet.service
systemctl disable kubelet.service
systemctl stop kube-proxy.service
systemctl disable kube-proxy.service

systemctl enable kube-apiserver.service
systemctl restart kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl restart kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl restart kube-scheduler.service
8.4 Install and configure keepalived on master01 and master02
yum install keepalived.x86_64 -y

# master01 config:
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL_11
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.10
    }
}

# master02 config:
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL_12
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.10
    }
}

systemctl enable keepalived
systemctl start keepalived
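
To check which master currently holds the VIP (a sketch):

ip addr show eth0 | grep 10.0.0.10                      # shows up on the active keepalived node
kubectl -s http://10.0.0.10:8080 get componentstatus    # the apiserver answers on the VIP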
8.5: Point kubelet and kube-proxy on all nodes at the apiserver VIP
vim /etc/kubernetes/kubelet
KUBELET_API_SERVER="--api-servers=http://10.0.0.10:8080"

vim /etc/kubernetes/config
KUBE_MASTER="--master=http://10.0.0.10:8080"

systemctl restart kubelet.service kube-proxy.service