Jenkins Notes 05: Building a Jenkins Continuous Integration Platform on Kubernetes (K8S)

Jenkins Master-Slave Distributed Builds

What Is a Master-Slave Distributed Build

In a Jenkins Master-Slave distributed build, build work is dispatched to subordinate Slave nodes, which relieves the load on the Master node and lets several builds run at the same time, somewhat like load balancing.
This lesson uses three servers: one K8S-Master and two K8S-Slaves. Prepare them in advance and install Jenkins.

How to Set Up a Master-Slave Distributed Build

Go to Manage Jenkins-Configure Global Security-Agents and choose either a fixed or a random TCP port for inbound agents; here we choose a random port, then Apply and Save.
Go to Manage Jenkins-Manage Nodes and Clouds-New Node, enter a name, select Permanent Agent, and create the agent node. In the settings that follow, /root/jenkins is the remote work directory; if it does not exist, create it on the slave server first, then Save.
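Before launching the agent, the slave server needs a JDK and the work directory from the node configuration. A minimal sketch (the OpenJDK package name is an assumption; use whichever Java version your Jenkins master requires):

# On the slave server: install a JDK and create the agent work directory
[root@localhost ~]# yum install -y java-11-openjdk
[root@localhost ~]# mkdir -p /root/jenkins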
Open the node you just created. Jenkins offers two ways to launch the agent; we use the first one: download agent.jar, copy it to the slave server, and run the command below on the slave. After a short wait, slave1 comes online.

[root@localhost ~]# java -jar agent.jar -jnlpUrl http://192.168.216.123:8083/computer/slave1/jenkins-agent.jnlp -secret 0d7993de62513c9860305e4075c0016d3be27fa4da006a9c1ac76e3064705d19 -workDir "/root/jenkins"

Next, test whether slave1 can compile and run a project, using both a FreeStyle project and a Pipeline project.
On the Jenkins master, open a FreeStyle project, check Restrict where this project can be run, and enter slave1 so the job runs on slave1. Apply, Save, then trigger a build manually.
For a Pipeline project, simply reference slave1 in the Pipeline script, for example:

node('slave1') {
    stage('Pull Code') {
        echo 'Pull Code'
    }
    stage('Build') {
        echo 'Build'
    }
    stage('Deploy') {
        echo 'Deploy'
    }
}

Master-Slave Distributed Builds on Kubernetes

Drawbacks of the Traditional Jenkins Master-Slave Setup

  • If the Master node suffers a single point of failure, the whole pipeline becomes unavailable
  • Each Slave node is configured differently to compile and package different languages; these differing configurations are inconvenient to manage and costly to maintain
  • Resources are distributed unevenly: jobs queue up on some Slave nodes while other Slave nodes sit idle
  • Resources are wasted: each Slave node is a physical machine or a VM, and its resources are not released while the node is idle

Kubernetes is introduced to solve these problems.

Kubernetes Overview

Kubernetes (K8S for short) is Google's open-source container cluster management system. Built on top of Docker, it provides containerized applications with a complete set of features such as deployment, resource scheduling, service discovery, and dynamic scaling, making it much easier to manage large container clusters. Its main capabilities are:

  • Package, instantiate, and run applications with Docker
  • Run and manage containers across machines as a cluster
  • Solve the problem of communication between Docker containers on different machines
  • Its self-healing mechanism keeps the container cluster in the state the user expects

Kubernetes + Docker + Jenkins Continuous Integration Architecture

Workflow: a build is triggered manually or automatically → Jenkins calls the K8S API → a Jenkins Slave pod is created dynamically → the Slave pod pulls the code from Git, compiles it, and builds the image → the image is pushed to the Harbor registry → when the Slave's work is done, the pod is destroyed automatically → the application is deployed to the test or production Kubernetes platform. The whole process is fully automated with no manual intervention.

Benefits of the Kubernetes + Docker + Jenkins CI Approach

  • High availability: if the Jenkins Master fails, Kubernetes automatically creates a new Jenkins Master container and attaches the existing Volume to it, so no data is lost and the service stays highly available
  • Dynamic scaling and sensible resource usage: each time a job runs, a Jenkins Slave is created automatically; when the job finishes, the Slave deregisters and its container is deleted, releasing its resources. Kubernetes also schedules Slaves onto idle nodes based on each node's resource usage, reducing the situation where jobs keep queuing on a node whose resources are already heavily used
  • Good scalability: when the Kubernetes cluster runs seriously short of resources and jobs start queuing, it is easy to add another Kubernetes Node to the cluster

Installing Kubernetes with kubeadm

Kubernetes Architecture


  • API Server: exposes the Kubernetes API; every request to operate on any resource goes through the interfaces provided by kube-apiserver
  • Etcd: Kubernetes' default storage system, holding all cluster data; a backup plan for the etcd data is needed in production
  • Controller-Manager: the cluster's internal management and control center, responsible for managing Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas; when a Node goes down unexpectedly, the Controller Manager notices it promptly and runs the automated repair flow, keeping the cluster in the desired state
  • Scheduler: watches for newly created Pods that have no Node assigned and selects a Node for each of them
  • Kubelet: maintains the container lifecycle and manages Volumes and networking
  • Kube-proxy: a core Kubernetes component deployed on every Node; it implements the communication and load-balancing mechanism behind Kubernetes Services

Installation Environment

I will use three machines: 192.168.216.123, 192.168.216.234, and 192.168.216.235. 123 is the K8S master; 234 and 235 are the two K8S nodes. 235 is created by cloning the 234 VM; when cloning, choose "full clone", then change the IP address and hostname on the 235 machine.

[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@localhost ~]# vim /etc/hostname
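For reference, the fields that usually change in the clone's ifcfg-ens33 look roughly like this (a sketch assuming a static-IP setup; the gateway and DNS values are placeholders for your own network):

# ifcfg-ens33 on the 235 clone -- only the fields that differ from 234 are shown
BOOTPROTO=static
IPADDR=192.168.216.235
GATEWAY=192.168.216.2
DNS1=192.168.216.2
# restart networking so the new address takes effect
[root@localhost ~]# systemctl restart network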

After installation, server 123 should be running: GitLab, Harbor, kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, calico, and NFS. Servers 234 and 235 should be running: kubelet, kube-proxy, and docker.

Steps for All Three Machines

Unless otherwise noted, run the following on all three machines.

# Set the hostname on 123
[root@localhost ~]# hostnamectl set-hostname k8s-master
# Set the hostname on 234
[root@localhost ~]# hostnamectl set-hostname k8s-node1
# Set the hostname on 235
[root@localhost ~]# hostnamectl set-hostname k8s-node2
# Add IP-to-hostname mappings
[root@localhost ~]# vim /etc/hosts
192.168.216.123 k8s-master
192.168.216.234 k8s-node1
192.168.216.235 k8s-node2
# Disable the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
# Disable SELinux temporarily
[root@localhost ~]# setenforce 0
# Disable SELinux permanently: set SELINUX=disabled
[root@localhost ~]# vim /etc/sysconfig/selinux
# Allow route forwarding and make bridged traffic pass through iptables
[root@localhost ~]# vim /etc/sysctl.d/k8s.conf
# k8s.conf contents:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
# Apply the configuration
[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
# Prerequisites for enabling ipvs in kube-proxy
[root@localhost ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@localhost ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Turn off swap temporarily
[root@localhost ~]# swapoff -a
# Turn off swap permanently: comment out the /dev/mapper/centos-swap swap line
[root@localhost ~]# vim /etc/fstab
# Install kubelet (runs pods and containers on every node), kubeadm (the cluster bootstrap command), and kubectl (the CLI used to talk to the cluster)
# Clear the yum cache
[root@localhost ~]# yum clean all
# Configure the yum repository
[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install
[root@localhost ~]# yum install -y kubelet kubeadm kubectl
# Enable kubelet at boot; do not start it yet, it would fail at this point
[root@localhost ~]# systemctl enable kubelet
# Check the version
[root@localhost ~]# kubelet --version

Steps for the Master Node

# Initialization command: set --kubernetes-version to the exact version installed and --apiserver-advertise-address to the master's IP
[root@k8s-master ~]# kubeadm init --kubernetes-version=1.24.0 \
--apiserver-advertise-address=192.168.216.123 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

When I ran this, it failed with [ERROR CRI]: container runtime is not running ... error: exit status 1. The workaround is to open /etc/containerd/config.toml, comment out the line disabled_plugins = ["cri"], run systemctl restart containerd, and then initialize the master again; this takes quite a while. If it still fails, run kubeadm reset to reset the node and then run the init again.
Frustratingly, after a lot of fiddling it still failed and I could not figure out why. I'm a developer, not an ops engineer, and I'm out of ideas here; if anyone knows the cause, please let me know, thanks a lot! From here on I'm just following along with the video course and have not actually run the remaining steps myself.
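For completeness, here is the workaround described above expressed as commands (a sketch of what I tried; it did not fully solve the problem in my case):

# comment out the cri entry in disabled_plugins and restart containerd
[root@k8s-master ~]# sed -i 's/^disabled_plugins/#disabled_plugins/' /etc/containerd/config.toml
[root@k8s-master ~]# systemctl restart containerd
# if a previous attempt left state behind, reset before running kubeadm init again
[root@k8s-master ~]# kubeadm reset -f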

[root@k8s-master /]# kubeadm init --kubernetes-version=1.24.0 \
--apiserver-advertise-address=192.168.216.123 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.216.123]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.216.123 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.216.123 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master /]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 六 2022-05-21 14:42:08 CST; 5min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 110553 (kubelet)
    Tasks: 19
   Memory: 31.6M
   CGroup: /system.slice/kubelet.service
           └─110553 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remo...

521 14:47:16 k8s-master kubelet[110553]: E0521 14:47:16.592218  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:16 k8s-master kubelet[110553]: E0521 14:47:16.693155  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:16 k8s-master kubelet[110553]: E0521 14:47:16.793613  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:16 k8s-master kubelet[110553]: E0521 14:47:16.894611  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:16 k8s-master kubelet[110553]: E0521 14:47:16.995076  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:17 k8s-master kubelet[110553]: E0521 14:47:17.095593  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:17 k8s-master kubelet[110553]: E0521 14:47:17.195746  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:17 k8s-master kubelet[110553]: E0521 14:47:17.297928  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:17 k8s-master kubelet[110553]: E0521 14:47:17.398256  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"
521 14:47:17 k8s-master kubelet[110553]: E0521 14:47:17.498668  110553 kubelet.go:2419] "Error getting node" err="node \"k8s-master\" not found"

If the init succeeds, record the kubeadm join command printed at the end; it is needed on node1 and node2 in a moment.
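If the join command was not saved, it can be regenerated on the master at any time:

# prints a fresh kubeadm join command with a new token
[root@k8s-master /]# kubeadm token create --print-join-command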

# Restart kubelet
[root@k8s-master /]# systemctl restart kubelet
# Configure the kubectl tool
[root@k8s-master /]# mkdir -p $HOME/.kube
[root@k8s-master /]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master /]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Prepare to download Calico (a pod networking component)
[root@k8s-master /]# cd /opt/software/
[root@k8s-master software]# mkdir k8s
[root@k8s-master software]# cd k8s
[root@k8s-master k8s]# wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
# Replace the default CIDR so it matches --pod-network-cidr; otherwise cross-node pod communication will not work
[root@k8s-master k8s]# sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml
# Install Calico
[root@k8s-master k8s]# kubectl apply -f calico.yaml
# Wait a few minutes and make sure every Pod is in the Running state
[root@k8s-master k8s]# kubectl get pod --all-namespaces -o wide

Steps for the Worker Nodes

Take the kubeadm join command from the last few lines of the init output on the master; it is used here to join the worker nodes to the cluster.

# Join the worker nodes to the cluster
[root@k8s-node1 ~]# kubeadm join xxx
[root@k8s-node2 ~]# kubeadm join xxx
# Start kubelet on the worker nodes
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node2 ~]# systemctl start kubelet
# Back on the master, check the nodes; if they are all Ready, the K8S cluster is up
[root@k8s-master ~]# kubectl get nodes

Common kubectl Commands

kubectl get nodes                               # check the status of all nodes
kubectl get ns                                  # list all namespaces
kubectl get pods -n {$nameSpace}                # list the pods in a namespace
kubectl describe pod <pod-name> -n {$nameSpace} # inspect what happened to a pod
kubectl logs --tail=1000 <pod-name> | less      # view a pod's logs
kubectl create -f xxx.yml                       # create cluster resources from a config file
kubectl delete -f xxx.yml                       # delete cluster resources via a config file
kubectl delete pod <pod-name> -n {$nameSpace}   # delete a pod
kubectl get service -n {$nameSpace}             # view the services in a namespace

Installing and Configuring NFS

NFS Overview

NFS (Network File System) lets different machines and different operating systems share files with each other over the network. We use NFS to share things such as Jenkins' configuration files and the Maven dependency repository.

Installing NFS

# Install NFS on the master and the worker nodes
[root@k8s-master ~]# yum install -y nfs-utils
# Create the shared directory on the master
[root@k8s-master ~]# mkdir -p /opt/nfs/jenkins
# Write the NFS export configuration
[root@k8s-master ~]# vim /etc/exports
# Contents below: * means the export is open to all IPs, rw means read-write, no_root_squash means a root client keeps root privileges on the share
/opt/nfs/jenkins *(rw,no_root_squash)
# Enable NFS at boot
[root@k8s-master ~]# systemctl enable nfs
# Start NFS
[root@k8s-master ~]# systemctl start nfs
# On a worker node, check the master's exported directories
[root@k8s-node1 ~]# showmount -e 192.168.216.123
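Optionally, verify that a worker node can actually mount the share (a quick check, not required by the later steps):

# temporarily mount the export on a worker node, then unmount it
[root@k8s-node1 ~]# mount -t nfs 192.168.216.123:/opt/nfs/jenkins /mnt
[root@k8s-node1 ~]# df -h /mnt
[root@k8s-node1 ~]# umount /mnt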

Installing Jenkins-Master on Kubernetes

This part is quite involved and had me completely lost; I was tempted to skip it. It will probably make sense once I've properly learned K8S.

Creating the NFS Client Provisioner

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; it relies on an existing NFS server for storage. Three files are used here: class.yaml, deployment.yaml, and rbac.yaml. Create a directory named nfs-client-provisioner and upload the files into it.
class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "true"

deployment.yaml (change the NFS_SERVER and NFS_PATH values under spec.template.spec.containers.env, and the server and path of the nfs-client-root volume, so they point to the master's NFS IP and export path)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.216.123
            - name: NFS_PATH
              value: /opt/nfs/jenkins/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.216.123
            path: /opt/nfs/jenkins/

rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

# From the directory containing the three files above, create the nfs-client-provisioner resources
[root@k8s-master nfs-client-provisioner]# kubectl create -f .
# Check that the pod was created successfully
[root@k8s-master nfs-client-provisioner]# kubectl get pods
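It also helps to confirm that the StorageClass defined in class.yaml was registered:

# managed-nfs-storage should be listed with provisioner fuseim.pri/ifs
[root@k8s-master nfs-client-provisioner]# kubectl get storageclass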

Installing Jenkins-Master

Four files are needed here: rbac.yaml, Service.yaml, ServiceAccount.yaml, and StatefulSet.yaml. Create a jenkins-master directory and upload the files into it.
rbac.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: kube-ops
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: kube-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: kube-ops
    
---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkinsClusterRole
  namespace: kube-ops
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkinsClusterRuleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkinsClusterRole
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-ops

Service.yaml (using spec.type=NodePort so Jenkins is reachable from outside the cluster)

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: kube-ops
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
  - name: web
    port: 8080
    targetPort: web
  - name: agent
    port: 50000
    targetPort: agent

ServiceAccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: kube-ops

StatefulSet.yaml (the value of spec.volumeClaimTemplates.spec.storageClassName must match metadata.name in the nfs-client-provisioner class.yaml)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
  namespace: kube-ops
spec:
  serviceName: jenkins
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            name: web
            protocol: TCP
          - containerPort: 50000
            name: agent
            protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

# Create the kube-ops namespace
[root@k8s-master jenkins-master]# kubectl create namespace kube-ops
# From the jenkins-master directory, create the Jenkins-Master resources
[root@k8s-master jenkins-master]# kubectl create -f .
# Check that the pod was created successfully
[root@k8s-master jenkins-master]# kubectl get pods -n kube-ops
# Inspect the pod details
[root@k8s-master jenkins-master]# kubectl describe pods -n kube-ops
# Check which NodePort was assigned
[root@k8s-master jenkins-master]# kubectl get service -n kube-ops

Open Jenkins in a browser through the NodePort; this is now the Jenkins running inside K8S.
Install the basic plugins: Git, Pipeline, and Extended Choice Parameter.
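On first access Jenkins asks for the initial admin password. Since Jenkins now runs inside the cluster, read it from the pod (assuming the StatefulSet above, whose single replica is named jenkins-0):

# print the initial admin password from the Jenkins pod
[root@k8s-master ~]# kubectl exec -n kube-ops jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword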

Integrating Jenkins with Kubernetes

First install the Kubernetes plugin in Jenkins; it is what makes the Jenkins-Kubernetes integration possible.
Go to Manage Jenkins-Manage Nodes and Clouds-Configure Clouds-Add a new cloud-Kubernetes. The name is up to you; the Kubernetes URL is a fixed value (https://kubernetes.default.svc.cluster.local), the Kubernetes namespace is the one created above (kube-ops), and the Jenkins URL is also a fixed value (http://jenkins.kube-ops.svc.cluster.local:8080).
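To sanity-check those two in-cluster addresses before saving, you can resolve them from a throwaway pod (optional; busybox:1.28 is just a convenient image that ships nslookup):

# verify that the Jenkins and Kubernetes service names resolve inside the cluster
[root@k8s-master ~]# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -n kube-ops -- nslookup jenkins.kube-ops.svc.cluster.local
[root@k8s-master ~]# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -n kube-ops -- nslookup kubernetes.default.svc.cluster.local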

Building a Custom Jenkins-Slave Image

When the Jenkins master builds a job, Kubernetes creates a Jenkins-Slave pod to carry out the build. The officially recommended image for the Jenkins slave is jenkins/jnlp-slave:latest, but it does not include a Maven environment, so for convenience we build a custom image on top of it.
Prepare apache-maven-3.6.2-bin.tar.gz, a Dockerfile, and settings.xml.
apache-maven-3.6.2-bin.tar.gz just needs to be downloaded; the Dockerfile is shown below; settings.xml is a Maven settings file configured with the Aliyun mirror, used to replace the default one.
The Dockerfile is as follows:

FROM jenkins/jnlp-slave:latest

MAINTAINER itcast

# Switch to the root user for the following steps
USER root

# Install Maven
COPY apache-maven-3.6.2-bin.tar.gz .

RUN tar -zxf apache-maven-3.6.2-bin.tar.gz && \
    mv apache-maven-3.6.2 /usr/local && \
    rm -f apache-maven-3.6.2-bin.tar.gz && \
    ln -s /usr/local/apache-maven-3.6.2/bin/mvn /usr/bin/mvn && \
    ln -s /usr/local/apache-maven-3.6.2 /usr/local/apache-maven && \
    mkdir -p /usr/local/apache-maven/repo

# Replace settings.xml
COPY settings.xml /usr/local/apache-maven/conf/settings.xml

USER jenkins

Create a jenkins-slave directory and put apache-maven-3.6.2-bin.tar.gz, the Dockerfile, and settings.xml into it.

# Build the image
[root@k8s-master jenkins-slave]# docker build -t jenkins-slave-maven:latest .
# Tag the image for the registry
[root@k8s-master jenkins-slave]# docker tag jenkins-slave-maven:latest 192.168.216.123:85/library/jenkins-slave-maven:latest
# Log in to the Harbor registry
[root@k8s-master jenkins-slave]# docker login 192.168.216.123:85
# Push the image to Harbor
[root@k8s-master jenkins-slave]# docker push 192.168.216.123:85/library/jenkins-slave-maven:latest

Testing That the Jenkins-Slave Pod Can Be Created

Create a new Pipeline project, write a Pipeline script like the example below, and run a build to test it.

def git_address = "http://192.168.66.100:82/itheima_group/tensquare_back_cluster.git"
def git_auth = "9d9a2707-eab7-4dc9-b106-e52f329cbc95"
//Define a pod template labeled jenkins-slave
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.66.102:85/library/jenkins-slave-maven:latest"
    )
]) {
    //Reference the jenkins-slave pod template to build the Jenkins-Slave pod
    node("jenkins-slave") {
        stage('Pull Code') {
            checkout([$class: 'GitSCM', branches: [[name: 'master']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
        }
    }
}

Microservice Continuous Integration with Jenkins + Kubernetes + Docker

Pulling Code and Building the Image

Create the NFS shared directory /opt/nfs/maven, add it to /etc/exports, set permissions on the maven directory, restart NFS with systemctl restart nfs, then create the project and write the Pipeline script.
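A sketch of those NFS steps on the master (the exports line mirrors the Jenkins share configured earlier):

# create the shared Maven repository directory and export it
[root@k8s-master ~]# mkdir -p /opt/nfs/maven
[root@k8s-master ~]# vim /etc/exports
/opt/nfs/maven *(rw,no_root_squash)
# reload the exports
[root@k8s-master ~]# systemctl restart nfs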

def git_address = "http://192.168.66.100:82/itheima_group/tensquare_back_cluster.git"
def git_auth = "9d9a2707-eab7-4dc9-b106-e52f329cbc95"
//Version tag for this build
def tag = "latest"
//Harbor registry address
def harbor_url = "192.168.66.102:85"
//Harbor project name
def harbor_project_name = "tensquare"
//Harbor credential ID
def harbor_auth = "71eff071-ec17-4219-bae1-5d0093e3d060"
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "192.168.66.102:85/library/jenkins-slave-maven:latest"
    ),
    containerTemplate(
        name: 'docker',
        image: "docker:stable",
        ttyEnabled: true,
        command: 'cat'
    ),
], volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    nfsVolume(mountPath: '/usr/local/apache-maven/repo', serverAddress: '192.168.66.101', serverPath: '/opt/nfs/maven'),
]) {
node("jenkins-slave") {
    // Step 1
    stage('Pull Code') {
        checkout([$class: 'GitSCM', branches: [[name: '${branch}']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
    }
    // Step 2
    stage('Compile Code') {
        //Compile and install the common module
        sh "mvn -f tensquare_common clean install"
    }
    // Step 3
    stage('Build Images and Deploy the Projects') {
        //Convert the selected project list into an array
        def selectedProjects = "${project_name}".split(',')
        for (int i = 0; i < selectedProjects.size(); i++) {
            //Take each project's name and port
            def currentProject = selectedProjects[i];
            //Project name
            def currentProjectName = currentProject.split('@')[0]
            //Project port
            def currentProjectPort = currentProject.split('@')[1]
            //Image name
            def imageName = "${currentProjectName}:${tag}"
            //Compile and build the local image
            sh "mvn -f ${currentProjectName} clean package dockerfile:build "
            container('docker') {
                //Tag the image
                sh "docker tag ${imageName} ${harbor_url}/${harbor_project_name}/${imageName}"
                //Log in to Harbor and push the image
                withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                    //Log in
                    sh "docker login -u ${username} -p ${password} ${harbor_url}"
                    //Push the image
                    sh "docker push ${harbor_url}/${harbor_project_name}/${imageName}"
                }
                //Remove the local images
                sh "docker rmi -f ${imageName}"
                sh "docker rmi -f ${harbor_url}/${harbor_project_name}/${imageName}"
            }
        }
    }
}
}

During the build you may run into file permission problems; the following commands resolve them.

chown -R jenkins:jenkins /opt/nfs/maven
chmod -R 777 /opt/nfs/maven
chmod 777 /var/run/docker.sock

Deploying the Microservices to K8S

Modify the Eureka Microservice's application.yml

server:
  port: ${PORT:10086}
spring:
  application:
    name: eureka
eureka:
  server:
    # Eviction interval, i.e. how often to scan for failed services (default 60*1000 ms)
    eviction-interval-timer-in-ms: 5000
    enable-self-preservation: false
    use-read-only-response-cache: false
  client:
    # How often the Eureka client fetches the service registry (default 30 s)
    registry-fetch-interval-seconds: 5
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://127.0.0.1:${server.port}/eureka/}
  instance:
    # Heartbeat interval, i.e. how long after one heartbeat the next one is sent (default 30 s)
    lease-renewal-interval-in-seconds: 5
    # After a heartbeat is received, how long to wait for the next one before the lease expires; must be greater than the heartbeat interval (default 90 s)
    lease-expiration-duration-in-seconds: 10
    instance-id: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}:${server.port}@${random.long(1000000,9999999)}
    hostname: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}

The Eureka registration addresses of the other microservices must be updated accordingly.

# Eureka configuration
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/ # Eureka server addresses
  instance:
    preferIpAddress: true

Install the Kubernetes Continuous Deploy plugin and modify the Pipeline script.

def deploy_image_name = "${harbor_url}/${harbor_project_name}/${imageName}"
//Deploy to K8S
sh """
    sed -i 's#\$IMAGE_NAME#${deploy_image_name}#' ${currentProjectName}/deploy.yml
    sed -i 's#\$SECRET_NAME#${secret_name}#' ${currentProjectName}/deploy.yml
"""
kubernetesDeploy configs: "${currentProjectName}/deploy.yml", kubeconfigId: "${k8s_auth}"

Add the K8S credential in Jenkins' credential store; its content is the kubeconfig:

cat /root/.kube/config

Create a Docker registry credential so Kubernetes can pull images from the private Harbor registry.

# Log in to Harbor
docker login -u itcast -p Itcast123 192.168.66.102:85
# Create the credential (secret)
kubectl create secret docker-registry registry-auth-secret --docker-server=192.168.66.102:85 --docker-username=itcast --docker-password=Itcast123 --docker-email=itcast@itcast.cn
# View the secret
kubectl get secret

Create a deploy.yml file in each project.
Eureka's deploy.yml:

---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  type: NodePort
  ports:
    - port: 10086
      name: eureka
      targetPort: 10086
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: eureka
          image: $IMAGE_NAME
          ports:
            - containerPort: 10086
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: EUREKA_SERVER
              value: "http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/"
            - name: EUREKA_INSTANCE_HOSTNAME
              value: "$(MY_POD_NAME).eureka"
  podManagementPolicy: "Parallel"

For the other projects' deploy.yml, change the name and port accordingly; for example, for the zuul microservice, replace eureka with zuul and change the port. After the build, check whether the services were created.

kubectl get pods -owide
kubectl get service