Table of Contents
1. Design the overall cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).
2. Deploy ansible to automate software operations, and set up a firewall server and a bastion host.
3. Deploy an NFS server to provide data for the whole web cluster; every web pod accesses it through PVs, PVCs, and volume mounts.
4. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor to handle code releases, image builds, data backups, and other pipeline work.
5. Package the self-developed Go web API into an image and deploy it to Kubernetes as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 20 and a maximum of 40 pods.
6. Use probes (liveness, readiness, startup) with the httpGet and exec methods to monitor the web pods and restart them as soon as a problem appears, improving pod reliability.
7. Use an Ingress to load-balance the web service, and use the Dashboard to oversee the cluster's resources.
8. Install Zabbix and Prometheus to monitor the cluster's resources (CPU, memory, network bandwidth, web service, database service, disk I/O, etc.).
9. Use the benchmarking tool ab to load-test the Kubernetes cluster and the related servers.
Project Architecture Diagram
Project Description
Simulate a company's web business: deploy Kubernetes, web, MySQL, NFS, Harbor, Zabbix, Prometheus, GitLab, Jenkins, and ansible to keep the web service highly available under a production-grade load.
Project Environment
CentOS 7.9, ansible 2.9.27, Docker 20.10.6, Docker Compose 2.18.1, Kubernetes 1.20.6, Calico 3.23, Harbor 2.4.1, NFS v4, metrics-server 0.6.0, ingress-nginx-controller v1.1.0, kube-webhook-certgen v1.1.0, MySQL 5.7.42, Dashboard v2.5.0, Prometheus 2.34.0, Zabbix 5.0, Grafana 10.0.0, jenkinsci/blueocean, GitLab 16.0.4-jh.
Environment Preparation
10 fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, set the hostnames, and add /etc/hosts entries.
IP Address Plan

Server | IP
------------ | -------------
k8smaster | 192.168.2.104
k8snode1 | 192.168.2.111
k8snode2 | 192.168.2.112
ansible | 192.168.2.119
nfs | 192.168.2.121
gitlab | 192.168.2.124
harbor | 192.168.2.106
zabbix | 192.168.2.117
firewalld | 192.168.2.141
bastionhost | 192.168.2.140
Disable SELinux and firewalld

```bash
# stop the firewall and keep it from starting at boot
service firewalld stop && systemctl disable firewalld

# disable SELinux temporarily
setenforce 0

# disable SELinux permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8smaster ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8smaster ~]# reboot
[root@k8smaster ~]# getenforce
Disabled
```
Configure Static IP Addresses

```bash
cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33

# k8smaster
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.104"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114

# k8snode1
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.111"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114

# k8snode2
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.112"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114
```
Set the Hostnames

```bash
hostnamectl set-hostname k8smaster
hostnamectl set-hostname k8snode1
hostnamectl set-hostname k8snode2

# switch user to reload the environment
su - root
[root@k8smaster ~]#
[root@k8snode1 ~]#
[root@k8snode2 ~]#
```

Upgrade the System (optional)

```bash
yum update -y
```

Add /etc/hosts Entries

```bash
vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.104 k8smaster
192.168.2.111 k8snode1
192.168.2.112 k8snode2
```
Project Steps
1. Design the overall cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).
```bash
# 1. Set up passwordless SSH between the nodes
ssh-keygen   # press Enter through the prompts

ssh-copy-id k8smaster
ssh-copy-id k8snode1
ssh-copy-id k8snode2

# 2. Disable the swap partition (kubeadm checks for it during init)
# temporarily: swapoff -a
# permanently: comment out the swap line in /etc/fstab
[root@k8smaster ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Thu Mar 23 15:22:20 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /       xfs     defaults        0 0
UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot   xfs     defaults        0 0
/dev/mapper/centos-home /home   xfs     defaults        0 0
#/dev/mapper/centos-swap swap   swap    defaults        0 0

# 3. Load the required kernel module
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# reload so the settings take effect
sysctl -p /etc/sysctl.d/k8s.conf
```
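As a quick sanity check that the fragment was written correctly, the file can be scanned to confirm every key is set to 1. The sketch below works against a temp copy rather than /etc/sysctl.d, so it is safe to run anywhere:

```shell
# write the same three parameters to a temp file and verify each value is 1
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
if awk -F'= *' '$2 + 0 != 1 { bad = 1 } END { exit bad }' "$conf"; then
    echo "all parameters enabled"
else
    echo "check failed"
fi
rm -f "$conf"
```

On the real hosts, `sysctl net.ipv4.ip_forward` gives the same answer for the live kernel value.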
```bash
# Why run modprobe br_netfilter?
# br_netfilter is the Linux kernel's bridge netfilter module; it lets tools such as
# iptables filter and manage traffic bridged across the same NIC. It is needed when
# the Linux host acts as a router or firewall and must filter, forward, or NAT
# packets arriving from different interfaces.

# Why set net.ipv4.ip_forward = 1?
# net.ipv4.ip_forward controls whether the kernel forwards IP packets between
# interfaces: 0 disables forwarding, 1 enables it. Routing requires it to be 1.

# 4. Configure the Alibaba Cloud repos
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

# 5. Configure the Alibaba Cloud repo for the Kubernetes components
[root@k8smaster ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

# 6. Configure time synchronization
[root@k8smaster ~]# crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org

# restart the crond service
[root@k8smaster ~]# service crond restart
```
```bash
# 7. Install the Docker service
yum install docker-ce-20.10.6 -y

# start Docker and enable it at boot
systemctl start docker && systemctl enable docker.service

# 8. Configure the Docker registry mirrors and cgroup driver
vim /etc/docker/daemon.json

{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}

# reload the configuration and restart Docker
systemctl daemon-reload && systemctl restart docker

# 9. Install the packages needed to initialize Kubernetes
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

# enable kubelet at boot
systemctl enable kubelet

# What each package does:
# kubeadm: the tool that initializes the cluster
# kubelet: runs on every node and starts the pods
# kubectl: deploys and manages applications; inspects, creates, deletes, and updates resources
```
```bash
# 10. Prepare the images for initializing the cluster
# Upload the offline image bundle to k8smaster, k8snode1, and k8snode2, then load it
docker load -i k8simage-1-20-6.tar.gz

# copy the bundle to the worker nodes
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root

# check the images
[root@k8snode1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago   118MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   2 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   2 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   2 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   2 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   3 years ago   683kB
```
```bash
# 11. Generate and edit the kubeadm configuration
kubeadm config print init-defaults > kubeadm.yaml

[root@k8smaster ~]# vim kubeadm.yaml
```

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.104        # IP of the control-plane node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster                        # hostname of the control-plane node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to the Alibaba Cloud registry
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16               # pod network CIDR; this line must be added
scheduler: {}
# append the following blocks
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```
```bash
# 12. Initialize the cluster from kubeadm.yaml
[root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c

# 13. Scale out the cluster by joining the worker nodes
[root@k8snode1 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c

[root@k8snode2 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
```
```bash
# 14. Check the node status on k8smaster
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m49s   v1.20.6
k8snode1    NotReady   <none>                 19s     v1.20.6
k8snode2    NotReady   <none>                 14s     v1.20.6

# 15. The ROLES of k8snode1 and k8snode2 are empty; <none> means worker node.
# Label them explicitly as workers:
[root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
node/k8snode1 labeled

[root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
node/k8snode2 labeled

[root@k8smaster ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
k8smaster   NotReady   control-plane,master   2m43s   v1.20.6
k8snode1    NotReady   worker                 2m15s   v1.20.6
k8snode2    NotReady   worker                 2m11s   v1.20.6
# Note: every node is still NotReady because no network plugin is installed yet
```
```bash
# 16. Install the Kubernetes network plugin - Calico
# Upload calico.yaml to k8smaster and install Calico from the yaml file.
wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate

[root@k8smaster ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

# check the cluster status again
[root@k8smaster ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
k8smaster   Ready    control-plane,master   5m57s   v1.20.6
k8snode1    Ready    worker                 3m27s   v1.20.6
k8snode2    Ready    worker                 3m22s   v1.20.6
# STATUS is Ready, so the cluster is running normally
```
2. Deploy ansible to automate software operations, and set up a firewall server and a bastion host.
```bash
# 1. Set up a passwordless channel: generate a key pair on the ansible host
[root@ansible ~]# ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/root/.ssh/id_ecdsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ecdsa.
Your public key has been saved in /root/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:FNgCSDVk6i3foP88MfekA2UzwNn6x3kyi7V+mLdoxYE root@ansible
The key's randomart image is:
+---[ECDSA 256]---+
|  ..+*o =.       |
| .o .* o.        |
|  . +.  .        |
| . . ..= E .     |
|  o o +S+ o .    |
|   + o+ o O +    |
|  . . .= B X     |
|   . .. + B.o    |
|    ..o. +oo..   |
+----[SHA256]-----+
[root@ansible ~]# cd /root/.ssh
[root@ansible .ssh]# ls
id_ecdsa  id_ecdsa.pub
```
```bash
# 2. Copy the public key to the root home directory on every server
# On every server: enable sshd, open port 22, and allow root login

# copy the public key to k8smaster
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.104
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.104 (192.168.2.104)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.104's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.104'"
and check to make sure that only the key(s) you wanted were added.

# copy the public key to the worker nodes
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.111
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.111 (192.168.2.111)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.111's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.111'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.112
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.112 (192.168.2.112)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.112's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.2.112'"
and check to make sure that only the key(s) you wanted were added.

# verify passwordless key authentication
[root@ansible .ssh]# ssh root@192.168.2.121
Last login: Tue Jun 20 10:33:33 2023 from 192.168.2.240
[root@nfs ~]# exit
logout
Connection to 192.168.2.121 closed.
[root@ansible .ssh]# ssh root@192.168.2.112
Last login: Tue Jun 20 10:34:18 2023 from 192.168.2.240
[root@k8snode2 ~]# exit
logout
Connection to 192.168.2.112 closed.
[root@ansible .ssh]#
```
```bash
# 3. Install ansible on the control node
# Ansible runs on any machine with Python 2.6 or 2.7 installed (Windows cannot be a control node).
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y

[root@ansible ~]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]

# 4. Write the host inventory
[root@ansible .ssh]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts
## 192.168.1.110
[k8smaster]
192.168.2.104

[k8snode]
192.168.2.111
192.168.2.112

[nfs]
192.168.2.121

[gitlab]
192.168.2.124

[harbor]
192.168.2.106

[zabbix]
192.168.2.117

# test
[root@ansible ansible]# ansible all -m shell -a "ip add"
```
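Beyond ad-hoc commands like the one above, recurring tasks against this inventory are usually captured in a playbook. A minimal sketch (the playbook filename and the package chosen are assumptions for illustration, not from the original setup):

```yaml
# install_nfs_utils.yml -- hypothetical playbook: ensure nfs-utils on every managed host
- name: install nfs-utils everywhere
  hosts: all
  become: yes
  tasks:
    - name: ensure nfs-utils is present
      yum:
        name: nfs-utils
        state: present
```

Run it from the control node with `ansible-playbook install_nfs_utils.yml`; the `yum` module is idempotent, so re-running it changes nothing on hosts that already have the package.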
Deploy the Bastion Host
JumpServer installs in just two steps:
Prepare a 64-bit Linux host with at least 2 CPU cores and 4 GB of RAM and access to the internet;
Run the following command as root to install JumpServer in one step.

```bash
curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash
```
Deploy the Firewall Server

```bash
# Shut down the VM and add a second network interface (ens37)

# Script implementing SNAT and DNAT
[root@firewalld ~]# cat snat_dnat.sh
#!/bin/bash

# enable routing
echo 1 > /proc/sys/net/ipv4/ip_forward

# stop firewalld
systemctl stop firewalld
systemctl disable firewalld

# clear iptables rules
iptables -F
iptables -t nat -F

# enable SNAT
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens33 -j MASQUERADE
# Every address from the internal 192.168.2.0/24 network is masqueraded (replaced)
# with the public IP of the ens33 interface, so the rule never needs to know what
# that IP actually is: whatever address ens33 holds is the one used.

# enable DNAT
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.2.104:22

# open web 80
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.104:80
```

```bash
# on the web server
[root@k8smaster ~]# cat open_app.sh
#!/bin/bash

# open ssh
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

# open dns
iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.2.0/24 -j ACCEPT

# open dhcp
iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT

# open http/https
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT

# open mysql
iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT

# set the default policy to DROP
iptables -t filter -P INPUT DROP

# drop icmp echo requests
iptables -t filter -A INPUT -p icmp --icmp-type 8 -j DROP
```
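These rules live only in kernel memory and disappear on reboot. On CentOS 7 a common approach (assuming the iptables-services package) is to save them to /etc/sysconfig/iptables, which the iptables service restores at boot. The sketch below writes an equivalent iptables-save-format file to a temp path so it can be inspected safely, with the real persistence commands left as comments:

```shell
# sketch: iptables-save format for a few of the filter rules above, in a temp file
rules=$(mktemp)
cat > "$rules" <<'EOF'
*filter
:INPUT DROP [0:0]
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp --dport 3306 -j ACCEPT
COMMIT
EOF
# on the real host (assumption -- verify the package name on your distro):
#   yum install -y iptables-services
#   iptables-save > /etc/sysconfig/iptables
#   systemctl enable iptables
grep -c -- '-A INPUT' "$rules"   # prints 4
rm -f "$rules"
```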
3. Deploy an NFS server to provide data for the whole web cluster; every web pod accesses it through PVs, PVCs, and volume mounts.
```bash
# 1. Set up the NFS server
[root@nfs ~]# yum install nfs-utils -y

# Install nfs-utils on every node in the k8s cluster as well,
# because creating NFS-backed volumes on a node requires NFS support
[root@k8smaster ~]# yum install nfs-utils -y

[root@k8smaster ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service

[root@k8smaster ~]# ps aux | grep nfs
root      87368  0.0  0.0      0     0 ?        S<   16:49   0:00 [nfsd4_callbacks]
root      87374  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87375  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87376  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87377  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87378  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87379  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87380  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      87381  0.0  0.0      0     0 ?        S    16:49   0:00 [nfsd]
root      96648  0.0  0.0 112824   988 pts/0    S+   17:02   0:00 grep --color=auto nfs

# 2. Configure the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.2.0/24(rw,no_root_squash,sync)

# 3. Create the shared directory and index.html
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "welcome to changsha" > index.html
[root@nfs web]# ls
index.html
[root@nfs web]# ll -d /web
drwxr-xr-x. 2 root root 24 Jun 18 16:46 /web

# 4. Re-export the shared directories
[root@nfs ~]# exportfs -r    # re-export all shared directories
[root@nfs ~]# exportfs -v    # show the exported directories
/web  192.168.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

# 5. Restart nfs and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

# 6. From any node in the k8s cluster, test mounting the NFS share
[root@k8snode1 ~]# mkdir /node1_nfs
[root@k8snode1 ~]# mount 192.168.2.121:/web /node1_nfs
You have new mail in /var/spool/mail/root
[root@k8snode1 ~]# df -Th | grep nfs
192.168.2.121:/web nfs4      17G  1.5G   16G    9% /node1_nfs

# 7. Unmount
[root@k8snode1 ~]# umount /node1_nfs
```
```bash
# 8. Create a PV backed by the NFS share
[root@k8smaster pv]# vim nfs-pv.yml
[root@k8smaster pv]# cat nfs-pv.yml
```

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs        # storage class name the PVC must match
  nfs:
    path: "/web"               # directory shared by the NFS server
    server: 192.168.2.121      # IP address of the NFS server
    readOnly: false            # access mode
```

```bash
[root@k8smaster pv]# kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@k8smaster pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     5s

# 9. Create a PVC that uses the PV
[root@k8smaster pv]# vim nfs-pvc.yml
[root@k8smaster pv]# cat nfs-pvc.yml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs        # bind to the nfs storage class
```

```bash
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created

[root@k8smaster pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            6s
```
```bash
# 10. Create a Deployment whose pods use the PVC
[root@k8smaster pv]# vim nginx-deployment.yaml
[root@k8smaster pv]# cat nginx-deployment.yaml
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
```

```bash
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

[root@k8smaster pv]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-76855d4d79-2q4vh   1/1     Running   0          42s   10.244.185.194   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-mvgq7   1/1     Running   0          42s   10.244.185.195   k8snode2   <none>           <none>
nginx-deployment-76855d4d79-zm8v4   1/1     Running   0          42s   10.244.249.3     k8snode1   <none>           <none>

# 11. Test access from every node
[root@k8smaster pv]# curl 10.244.185.194
welcome to changsha
[root@k8smaster pv]# curl 10.244.185.195
welcome to changsha
[root@k8smaster pv]# curl 10.244.249.3
welcome to changsha

[root@k8snode1 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode1 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha

[root@k8snode2 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode2 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode2 ~]# curl 10.244.249.3
welcome to changsha

# 12. Change the content on the NFS server
[root@nfs web]# echo "hello,world" >> index.html
[root@nfs web]# cat index.html
welcome to changsha
hello,world

# 13. Access again
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha
hello,world
```
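Pod IPs change whenever pods are rescheduled, so for anything beyond a quick test the deployment is normally fronted by a Service with a stable cluster address. A minimal sketch (the Service name is an assumption; the selector and port match the deployment above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-web-svc        # hypothetical name
spec:
  selector:
    app: nginx               # matches the pod label set by the deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

After `kubectl apply`, `curl` against the Service's ClusterIP reaches one of the three replicas, and the same Service is what an Ingress rule would later point at.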
4. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor to handle code releases, image builds, data backups, and other pipeline work.
1. Deploy GitLab
```bash
# Deploying GitLab
# https://gitlab.cn/install/

[root@localhost ~]# hostnamectl set-hostname gitlab
[root@localhost ~]# su - root
Last login: Sun Jun 18 18:28:08 CST 2023 from 192.168.2.240 pts/0
[root@gitlab ~]# cd /etc/sysconfig/network-scripts/
[root@gitlab network-scripts]# vim ifcfg-ens33
[root@gitlab network-scripts]# service network restart
Restarting network (via systemctl):  [  OK  ]
[root@gitlab network-scripts]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@gitlab network-scripts]# service firewalld stop && systemctl disable firewalld
Redirecting to /bin/systemctl stop firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@gitlab network-scripts]# reboot
[root@gitlab ~]# getenforce
Disabled
```
```bash
# 1. Install and configure the required dependencies
yum install -y curl policycoreutils-python openssh-server perl

# 2. Configure the JiHu GitLab package repository
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos
==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo
[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1
==> Generate yum cache for gitlab-jh
==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".

[root@gitlab ~]# yum install gitlab-jh -y
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:
  sudo gitlab-ctl reconfigure

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md

Help us improve the installation experience,
let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66

[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://myweb.first.com'

[root@gitlab ~]# gitlab-ctl reconfigure
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.

NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

gitlab Reconfigured!
```
```bash
# check the initial password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: Al5rgYomhXDz5kNfDl3y8qunrSX334aZZxX5vONJ05s=

# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.

# After logging in, the UI language can be switched to Chinese
# under the user's Profile / Preferences

# change the password

[root@gitlab ~]# gitlab-rake gitlab:env:info

System information
System:
Proxy:          no
Current User:   git
Using RVM:      no
Ruby Version:   3.0.6p216
Gem Version:    3.4.13
Bundler Version:2.4.13
Rake Version:   13.0.6
Redis Version:  6.2.11
Sidekiq Version:6.5.7
Go Version:     unknown

GitLab information
Version:        16.0.4-jh
Revision:       c2ed99db36f
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     PostgreSQL
DB Version:     13.11
URL:            http://myweb.first.com
HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
SSH Clone URL:  git@myweb.first.com:some-group/some-project.git
Elasticsearch:  no
Geo:            no
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers:

GitLab Shell
Version:          14.20.0
Repository storages:
- default:        unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
```
2. Deploy Jenkins

```bash
# Deploy Jenkins into the k8s cluster
# 1. Install git
[root@k8smaster jenkins]# yum install git -y

# 2. Download the yaml files
[root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
Cloning into 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@k8smaster jenkins]# cd kubernetes-jenkins/
[root@k8smaster kubernetes-jenkins]# ls
deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml
```
```bash
# 3. Create the namespace
[root@k8smaster kubernetes-jenkins]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools

[root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created

[root@k8smaster kubernetes-jenkins]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
devops-tools      Active   19s
ingress-nginx     Active   139m
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h
```
```bash
# 4. Create the service account, cluster role, and binding
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
```

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-admin
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
```
```bash
# 5. Create the volume used for persistent data
[root@k8smaster kubernetes-jenkins]# cat volume.yaml
```

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8snode1    # change to the name of a node in your cluster

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```

```bash
[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created

[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [k8snode1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>
```
-
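Note that a local PV pins the Jenkins pod to k8snode1 through node affinity: if that node goes down, the pod cannot be rescheduled elsewhere. Since this project already runs an NFS server (192.168.2.121), an NFS-backed PV would avoid the pinning. The sketch below is an alternative, not what was deployed; the export path /data/jenkins is an assumption and would have to exist in /etc/exports on the NFS server:

```
# Hypothetical NFS-backed alternative to the local PV above
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-nfs
spec:
  storageClassName: nfs
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.121
    path: /data/jenkins   # hypothetical export path -- adjust to your NFS exports
```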
# 6. Deploy Jenkins
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim

[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created

[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s
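The deployment relies on long initialDelaySeconds values to wait out Jenkins' slow first boot. A startupProbe (the third probe type mentioned in this project's overview) is another way to express this: it holds off the liveness and readiness probes until it succeeds. The fragment below is a sketch reusing the same /login endpoint, not part of the deployed manifest; it would go under the jenkins container alongside the other probes:

```
# Sketch: startupProbe for the jenkins container (values are illustrative)
startupProbe:
  httpGet:
    path: "/login"
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # up to 10s x 30 = 300s for first boot before restart
```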
# 7. Create a Service to publish the Jenkins pod
[root@k8smaster kubernetes-jenkins]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000

[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created

[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

# 8. Access Jenkins from the Windows machine: host IP + NodePort
http://192.168.2.104:32000/login?from=%2F
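With a NodePort Service, the URL is simply any node's IP plus the nodePort declared in service.yaml. A minimal sketch, using the addresses from this project's IP plan:

```shell
# Build the Jenkins login URL from a node IP and the Service's nodePort.
# In a live cluster the values could be read back with (not run here):
#   kubectl get nodes -o wide
#   kubectl get svc jenkins-service -n devops-tools -o jsonpath='{.spec.ports[0].nodePort}'
NODE_IP=192.168.2.104   # k8smaster, from this project's IP plan
NODE_PORT=32000         # nodePort set in service.yaml
URL="http://${NODE_IP}:${NODE_PORT}/login"
echo "$URL"
```

Any of the three node IPs works, because kube-proxy opens the nodePort on every node.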
# 9. Get the initial login password from inside the pod
[root@k8smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q -n devops-tools -- bash
bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
b0232e2dad164f89ad2221e4c46b0d46

# Change the password in the Jenkins web UI

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-5nn7m   1/1     Running   0          91s