Kubernetes-Based High-Availability Web Cluster Project

Table of Contents

Project architecture diagram

Project description

Project environment

Environment preparation

IP address plan

Disable SELinux and firewalld

Configure static IP addresses

Set the hostnames

Upgrade the system (optional)

Add hosts entries

Project steps

I. Design the cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).

II. Deploy ansible to automate routine operations, and deploy the firewall server and the bastion host.

Deploy the bastion host

Deploy the firewall server

III. Deploy an NFS server that provides data to the whole web cluster; every web pod mounts it through a PV, a PVC, and a volume mount.

IV. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor to handle code releases, image builds, data backups, and other pipeline work.

1. Deploy GitLab

2. Deploy Jenkins

3. Deploy Harbor

V. Build the Go web API service into an image and deploy it to Kubernetes as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 20 and a maximum of 40 pods.

VI. Start a MySQL pod that provides the database service for the web application.

Attempt: deploying stateful MySQL on Kubernetes

VII. Use probes (liveness, readiness, startup) with the httpGet and exec methods to monitor the web pods and restart them as soon as a problem appears, improving reliability.

VIII. Use Ingress to load-balance the web application and use the Dashboard to keep an eye on cluster resources.

Use the Dashboard to keep an eye on cluster resources

IX. Install Zabbix and Prometheus to monitor the cluster's resources (CPU, memory, network bandwidth, the web service, the database service, disk I/O, and so on).

X. Use the benchmarking tool ab to load-test the whole Kubernetes cluster and the related servers.


Project architecture diagram

Project description

Simulate a company's web business: deploy Kubernetes, the web application, MySQL, NFS, Harbor, Zabbix, Prometheus, GitLab, Jenkins, and ansible, keep the web service highly available, and approximate a heavily loaded production environment.

Project environment

CentOS 7.9, ansible 2.9.27, Docker 20.10.6, Docker Compose 2.18.1, Kubernetes 1.20.6, Calico 3.23, Harbor 2.4.1, NFS v4, metrics-server 0.6.0, ingress-nginx-controller v1.1.0, kube-webhook-certgen v1.1.0, MySQL 5.7.42, Dashboard v2.5.0, Prometheus 2.34.0, Zabbix 5.0, Grafana 10.0.0, jenkinsci/blueocean, GitLab 16.0.4-jh.

Environment preparation

Ten fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, set the hostnames, and add hosts entries.

IP address plan

Server        IP
k8smaster     192.168.2.104
k8snode1      192.168.2.111
k8snode2      192.168.2.112
ansible       192.168.2.119
nfs           192.168.2.121
gitlab        192.168.2.124
harbor        192.168.2.106
zabbix        192.168.2.117
firewalld     192.168.2.141
Bastionhost   192.168.2.140

Disable SELinux and firewalld


   
   
# Stop the firewall and keep it from starting at boot
service firewalld stop && systemctl disable firewalld
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8smaster ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@k8smaster ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8smaster ~]# reboot
[root@k8smaster ~]# getenforce
Disabled

Configure static IP addresses


   
   
cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33
# k8smaster
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.104"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114
# k8snode1
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.111"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114
# k8snode2
TYPE="Ethernet"
BOOTPROTO="static"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.2.112"
PREFIX=24
GATEWAY="192.168.2.1"
DNS1=114.114.114.114
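
The three blocks above are the same template applied to k8smaster, k8snode1, and k8snode2; only IPADDR changes. On CentOS 7 the new address takes effect after restarting the network service:

service network restart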

Set the hostnames


   
   
hostnamectl set-hostname k8smaster
hostnamectl set-hostname k8snode1
hostnamectl set-hostname k8snode2
# Switch user to reload the environment
su - root
[root@k8smaster ~]#
[root@k8snode1 ~]#
[root@k8snode2 ~]#

Upgrade the system (optional)

yum update -y
   
   

Add hosts entries


   
   
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.104 k8smaster
192.168.2.111 k8snode1
192.168.2.112 k8snode2
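
The same entries must exist on all three Kubernetes nodes. A simple loop (illustrative, not from the original post) copies the file from the master once the nodes are reachable over SSH:

for h in k8snode1 k8snode2; do scp /etc/hosts root@$h:/etc/hosts; done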

Project steps

I. Design the cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).


   
   
# 1. Set up passwordless SSH between the three nodes
ssh-keygen     # press Enter through every prompt
ssh-copy-id k8smaster
ssh-copy-id k8snode1
ssh-copy-id k8snode2
# 2. Disable swap (kubeadm checks for it during init)
# Temporarily: swapoff -a
# Permanently: comment out the swap line in /etc/fstab
[root@k8smaster ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Mar 23 15:22:20 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
# 3. Load the required kernel module
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload so the settings take effect
sysctl -p /etc/sysctl.d/k8s.conf
# Why run modprobe br_netfilter?
#   It loads the br_netfilter kernel module, the bridge-netfilter module that lets tools such as
#   iptables filter and manage traffic crossing a Linux bridge. It is needed whenever the host acts
#   as a router or firewall and has to filter, forward, or NAT packets coming from different interfaces.
# Why set net.ipv4.ip_forward = 1?
#   This kernel parameter controls IP forwarding: 0 disables routing between interfaces, 1 enables it,
#   which Kubernetes networking requires.
# 4. Configure the Aliyun repos
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
# 5. Configure the Aliyun repo for the Kubernetes packages
[root@k8smaster ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# 6. Configure time synchronization
[root@k8smaster ~]# crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Restart the crond service
[root@k8smaster ~]# service crond restart
# 7. Install docker
yum install docker-ce-20.10.6 -y
# Start docker and enable it at boot
systemctl start docker && systemctl enable docker.service
# 8. Configure the docker registry mirrors and cgroup driver
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com", "https://registry.docker-cn.com", "https://docker.mirrors.ustc.edu.cn", "https://dockerhub.azk8s.cn", "http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Reload the configuration and restart docker
systemctl daemon-reload && systemctl restart docker
# 9. Install the packages needed to initialize Kubernetes
yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
# Enable kubelet at boot
systemctl enable kubelet
# Note: what each package does
#   kubeadm: the tool that bootstraps the k8s cluster
#   kubelet: installed on every node; it starts the pods
#   kubectl: deploys and manages applications and queries, creates, deletes, and updates resources
# 10. Load the offline images needed by kubeadm
# Upload the offline image archive to k8smaster, k8snode1, and k8snode2, then load it
docker load -i k8simage-1-20-6.tar.gz
# Copy the archive to the worker nodes
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
[root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root
# Check the images
[root@k8snode1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.20.6 9a1ebfd8124d 2 years ago 118MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.20.6 b93ab2ec4475 2 years ago 47.3MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.20.6 560dd11d4550 2 years ago 116MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.20.6 b05d611c1af9 2 years ago 122MB
calico/pod2daemon-flexvol v3.18.0 2a22066e9588 2 years ago 21.7MB
calico/node v3.18.0 5a7c4970fbc2 2 years ago 172MB
calico/cni v3.18.0 727de170e4ce 2 years ago 131MB
calico/kube-controllers v3.18.0 9a154323fbf7 2 years ago 53.4MB
registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 2 years ago 253MB
registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 3 years ago 45.2MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 years ago 683kB
# 11. Generate and edit the kubeadm configuration
kubeadm config print init-defaults > kubeadm.yaml
[root@k8smaster ~]# vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.104   # IP of the control-plane node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster                   # hostname of the control-plane node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to the Aliyun repository
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16          # pod network CIDR; this line has to be added
scheduler: {}
# Append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# 12. Initialize Kubernetes from kubeadm.yaml
[root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
# 13. Scale out the cluster: join the worker nodes
[root@k8snode1 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
[root@k8snode2 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
# 14. Check the cluster nodes on k8smaster
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady control-plane,master 2m49s v1.20.6
k8snode1 NotReady <none> 19s v1.20.6
k8snode2 NotReady <none> 14s v1.20.6
# 15. The ROLES column of k8snode1 and k8snode2 is empty (<none>), which marks them as worker nodes.
# Label them as workers:
[root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
node/k8snode1 labeled
[root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
node/k8snode2 labeled
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster NotReady control-plane,master 2m43s v1.20.6
k8snode1 NotReady worker 2m15s v1.20.6
k8snode2 NotReady worker 2m11s v1.20.6
# Note: every node is still NotReady because no network plugin is installed yet
# 16. Install the Calico network plugin
# Upload calico.yaml to k8smaster and install Calico from the manifest
wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate
[root@k8smaster ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
# Check the cluster again
[root@k8smaster ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster Ready control-plane,master 5m57s v1.20.6
k8snode1 Ready worker 3m27s v1.20.6
k8snode2 Ready worker 3m22s v1.20.6
# STATUS is Ready, so the cluster is up and running
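
As an extra sanity check (not shown in the original capture), the Calico and CoreDNS pods themselves can be inspected; every pod in kube-system should settle into the Running state:

[root@k8smaster ~]# kubectl get pods -n kube-system -o wide
# expect a calico-node pod on every node, plus calico-kube-controllers and the coredns pods, all Running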

II. Deploy ansible to automate routine operations, and deploy the firewall server and the bastion host.


   
   
# 1. Set up key-based SSH: generate a key pair on the ansible host
[root@ansible ~]# ssh-keygen -t ecdsa
Generating public/private ecdsa key pair.
Enter file in which to save the key (/root/.ssh/id_ecdsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ecdsa.
Your public key has been saved in /root/.ssh/id_ecdsa.pub.
The key fingerprint is:
SHA256:FNgCSDVk6i3foP88MfekA2UzwNn6x3kyi7V+mLdoxYE root@ansible
The key's randomart image is:
+---[ECDSA 256]---+
| ..+*o =. |
| .o .* o. |
| . +. . |
| . . ..= E . |
| o o +S+ o . |
| + o+ o O + |
| . . .= B X |
| . .. + B.o |
| ..o. +oo.. |
+----[SHA256]-----+
[root@ansible ~]# cd /root/.ssh
[root@ansible .ssh]# ls
id_ecdsa id_ecdsa.pub
# 2. Upload the public key to the root home directory of every server
# Every server must run sshd, open port 22, and allow root login
# Upload the public key to k8smaster
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.104
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.104 (192.168.2.104)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.104's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.2.104'"
and check to make sure that only the key(s) you wanted were added.
# Upload the public key to the worker nodes
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.111
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.111 (192.168.2.111)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.111's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.2.111'"
and check to make sure that only the key(s) you wanted were added.
[root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.112
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
The authenticity of host '192.168.2.112 (192.168.2.112)' can't be established.
ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.2.112's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@192.168.2.112'"
and check to make sure that only the key(s) you wanted were added.
# Verify that passwordless, key-based login works
[root@ansible .ssh]# ssh root@192.168.2.121
Last login: Tue Jun 20 10:33:33 2023 from 192.168.2.240
[root@nfs ~]# exit
logout
Connection to 192.168.2.121 closed.
[root@ansible .ssh]# ssh root@192.168.2.112
Last login: Tue Jun 20 10:34:18 2023 from 192.168.2.240
[root@k8snode2 ~]# exit
logout
Connection to 192.168.2.112 closed.
[root@ansible .ssh]#
# 3. Install ansible on the control node
# Any machine with Python 2.6 or 2.7 can run Ansible (a Windows host cannot act as the control node).
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y
[root@ansible ~]# ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
# 4. Write the inventory
[root@ansible .ssh]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg hosts roles
[root@ansible ansible]# vim hosts
## 192.168.1.110
[k8smaster]
192.168.2.104
[k8snode]
192.168.2.111
192.168.2.112
[nfs]
192.168.2.121
[gitlab]
192.168.2.124
[harbor]
192.168.2.106
[zabbix]
192.168.2.117
# Test
[root@ansible ansible]# ansible all -m shell -a "ip add"
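
With the inventory in place, the same control node can push routine preparation work to every group. The playbook below is only a sketch of how that might look for this project; the file name and tasks are illustrative, not taken from the original article:

[root@ansible ansible]# cat prep.yml
- hosts: all
  remote_user: root
  tasks:
    - name: install nfs-utils everywhere (k8s nodes need it to mount NFS volumes)
      yum:
        name: nfs-utils
        state: present
    - name: distribute the cluster hosts file from the control node
      copy:
        src: /etc/hosts
        dest: /etc/hosts
[root@ansible ansible]# ansible-playbook prep.yml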
Deploy the bastion host

JumpServer can be installed in just two quick steps:
Prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM and Internet access;
Run the following command as root to install JumpServer in one shot.

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash
   
   

Deploy the firewall server

   
   
# Shut down the VM and add a second network card (ens37)
# Script that implements SNAT/DNAT
[root@firewalld ~]# cat snat_dnat.sh
#!/bin/bash
# open route
echo 1 >/proc/sys/net/ipv4/ip_forward
# stop firewall
systemctl stop firewalld
systemctl disable firewalld
# clear iptables rule
iptables -F
iptables -t nat -F
# enable snat
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens33 -j MASQUERADE
# Masquerade everything coming from the internal 192.168.2.0/24 network as the IP of ens33;
# the benefit is that the rule never needs to know which public IP ens33 currently holds.
# enable dnat
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.2.104:22
# open web 80
iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.104:80
# On the web server
[root@k8smaster ~]# cat open_app.sh
#!/bin/bash
# open ssh
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
# open dns
iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.2.0/24 -j ACCEPT
# open dhcp
iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT
# open http/https
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT
# open mysql
iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT
# default policy DROP
iptables -t filter -P INPUT DROP
# drop icmp request
iptables -t filter -A INPUT -p icmp --icmp-type 8 -j DROP
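
Note that iptables rules written this way live only in memory and vanish on reboot. A common way to persist them on CentOS 7, assuming the iptables-services package is acceptable in this environment, is:

yum install iptables-services -y
systemctl enable iptables
# save the running rule set to the file that iptables.service restores at boot
iptables-save > /etc/sysconfig/iptables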

III. Deploy an NFS server that provides data to the whole web cluster; every web pod mounts it through a PV, a PVC, and a volume mount.


   
   
# 1. Set up the NFS server
[root@nfs ~]# yum install nfs-utils -y
# Install nfs-utils on every node in the k8s cluster as well, because the node servers need NFS support to create the volumes
[root@k8smaster ~]# yum install nfs-utils -y
[root@k8smaster ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
[root@k8smaster ~]# ps aux |grep nfs
root 87368 0.0 0.0 0 0 ? S< 16:49 0:00 [nfsd4_callbacks]
root 87374 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87375 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87376 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87377 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87378 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87379 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87380 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87381 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 96648 0.0 0.0 112824 988 pts/0 S+ 17:02 0:00 grep --color=auto nfs
# 2. Configure the exported directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.2.0/24(rw,no_root_squash,sync)
# 3. Create the shared directory and an index.html
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "welcome to changsha" >index.html
[root@nfs web]# ls
index.html
[root@nfs web]# ll -d /web
drwxr-xr-x. 2 root root 24 Jun 18 16:46 /web
# 4. Refresh NFS, i.e. re-export the shared directories
[root@nfs ~]# exportfs -r     # re-export every shared directory
[root@nfs ~]# exportfs -v     # show the exported directories
/web 192.168.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
# 5. Restart NFS and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
# 6. Test mounting the exported directory from any node in the k8s cluster
[root@k8snode1 ~]# mkdir /node1_nfs
[root@k8snode1 ~]# mount 192.168.2.121:/web /node1_nfs
You have new mail in /var/spool/mail/root
[root@k8snode1 ~]# df -Th|grep nfs
192.168.2.121:/web nfs4 17G 1.5G 16G 9% /node1_nfs
# 7. Unmount it again
[root@k8snode1 ~]# umount /node1_nfs
# 8. Create a PV backed by the NFS export
[root@k8smaster pv]# vim nfs-pv.yml
[root@k8smaster pv]# cat nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs        # storage class name the PVC will ask for
  nfs:
    path: "/web"               # directory exported by the NFS server
    server: 192.168.2.121      # IP of the NFS server
    readOnly: false            # access mode
[root@k8smaster pv]# kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@k8smaster pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-web 10Gi RWX Retain Available nfs 5s
# 9. Create a PVC that uses the PV
[root@k8smaster pv]# vim nfs-pvc.yml
[root@k8smaster pv]# cat nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs        # bind to the nfs-class PV
[root@k8smaster pv]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created
[root@k8smaster pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-web Bound pv-web 10Gi RWX nfs 6s
# 10. Create pods that use the PVC
[root@k8smaster pv]# vim nginx-deployment.yaml
[root@k8smaster pv]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: pvc-web
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: sc-pv-storage-nfs
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@k8smaster pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-76855d4d79-2q4vh 1/1 Running 0 42s 10.244.185.194 k8snode2 <none> <none>
nginx-deployment-76855d4d79-mvgq7 1/1 Running 0 42s 10.244.185.195 k8snode2 <none> <none>
nginx-deployment-76855d4d79-zm8v4 1/1 Running 0 42s 10.244.249.3 k8snode1 <none> <none>
# 11. Test access
[root@k8smaster pv]# curl 10.244.185.194
welcome to changsha
[root@k8smaster pv]# curl 10.244.185.195
welcome to changsha
[root@k8smaster pv]# curl 10.244.249.3
welcome to changsha
[root@k8snode1 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode1 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha
[root@k8snode2 ~]# curl 10.244.185.194
welcome to changsha
[root@k8snode2 ~]# curl 10.244.185.195
welcome to changsha
[root@k8snode2 ~]# curl 10.244.249.3
welcome to changsha
# 12. Change the content on the NFS server
[root@nfs web]# echo "hello,world" >> index.html
[root@nfs web]# cat index.html
welcome to changsha
hello,world
# 13. Access it again
[root@k8snode1 ~]# curl 10.244.249.3
welcome to changsha
hello,world
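
The curl tests above hit the pod IPs directly, and those change whenever a pod is recreated. A small Service in front of the deployment gives it a stable entry point; the manifest below is a sketch only (the Service name and NodePort are illustrative, not from the original article):

apiVersion: v1
kind: Service
metadata:
  name: nginx-web-svc           # hypothetical name
spec:
  selector:
    app: nginx                  # matches the label on the deployment's pods
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080             # any free port in the 30000-32767 range

After applying it with kubectl apply, the page is reachable from outside the cluster at any node IP, for example curl http://192.168.2.111:30080.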

IV. Build the CI/CD environment: deploy GitLab, Jenkins, and Harbor to handle code releases, image builds, data backups, and other pipeline work.

1. Deploy GitLab

   
   
# Deploy GitLab
https://gitlab.cn/install/
[root@localhost ~]# hostnamectl set-hostname gitlab
[root@localhost ~]# su - root
su - root
Last login: Sun Jun 18 18:28:08 CST 2023 from 192.168.2.240 on pts/0
[root@gitlab ~]# cd /etc/sysconfig/network-scripts/
[root@gitlab network-scripts]# vim ifcfg-ens33
[root@gitlab network-scripts]# service network restart
Restarting network (via systemctl): [ OK ]
[root@gitlab network-scripts]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@gitlab network-scripts]# service firewalld stop && systemctl disable firewalld
Redirecting to /bin/systemctl stop firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@gitlab network-scripts]# reboot
[root@gitlab ~]# getenforce
Disabled
# 1. Install the required dependencies
yum install -y curl policycoreutils-python openssh-server perl
# 2. Configure the JiHu GitLab package repository
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos
==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo
[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1
==> Generate yum cache for gitlab-jh
==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".
[root@gitlab ~]# yum install gitlab-jh -y
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:
sudo gitlab-ctl reconfigure
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md
Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66
[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://myweb.first.com'
[root@gitlab ~]# gitlab-ctl reconfigure
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
gitlab Reconfigured!
# Look up the initial root password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: Al5rgYomhXDz5kNfDl3y8qunrSX334aZZxX5vONJ05s=
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
# After logging in, the UI language can be switched under the user's profile/preferences,
# and the root password can be changed there as well.
[root@gitlab ~]# gitlab-rake gitlab:env:info
System information
System:
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 3.0.6p216
Gem Version: 3.4.13
Bundler Version:2.4.13
Rake Version: 13.0.6
Redis Version: 6.2.11
Sidekiq Version:6.5.7
Go Version: unknown
GitLab information
Version: 16.0.4-jh
Revision: c2ed99db36f
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 13.11
URL: http://myweb.first.com
HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
SSH Clone URL: git@myweb.first.com:some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell
Version: 14.20.0
Repository storages:
- default: unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
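
To push code into this instance, client machines (and later the Jenkins build agent) need to resolve the external_url configured above. A minimal sketch, where the project name myweb is a placeholder rather than a repository from the original article:

# on the developer machine or build node
echo "192.168.2.124 myweb.first.com" >> /etc/hosts
git clone http://myweb.first.com/root/myweb.git    # hypothetical project created in the GitLab UI
cd myweb
# add the Go web service source, then
git add . && git commit -m "import web service" && git push origin main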
2. Deploy Jenkins

   
   
# Deploy Jenkins inside Kubernetes
# 1. Install git
[root@k8smaster jenkins]# yum install git -y
# 2. Download the manifests
[root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
Cloning into 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@k8smaster jenkins]# cd kubernetes-jenkins/
[root@k8smaster kubernetes-jenkins]# ls
deployment.yaml namespace.yaml README.md serviceAccount.yaml service.yaml volume.yaml
# 3. Create the namespace
[root@k8smaster kubernetes-jenkins]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created
[root@k8smaster kubernetes-jenkins]# kubectl get ns
NAME STATUS AGE
default Active 22h
devops-tools Active 19s
ingress-nginx Active 139m
kube-node-lease Active 22h
kube-public Active 22h
kube-system Active 22h
# 4. Create the service account, cluster role, and binding
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
# 5. Create the volume that stores the Jenkins data
[root@k8smaster kubernetes-jenkins]# cat volume.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8snode1          # change to the name of a worker node in the cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created
[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jenkins-pv-volume 10Gi RWO Retain Bound devops-tools/jenkins-pv-claim local-storage 33s
pv-web 10Gi RWX Retain Bound default/pvc-web nfs 21h
[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name: jenkins-pv-volume
Labels: type=local
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: devops-tools/jenkins-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [k8snode1]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /mnt
Events: <none>
# 6. Deploy Jenkins
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "2Gi"
            cpu: "1000m"
          requests:
            memory: "500Mi"
            cpu: "500m"
        ports:
        - name: httpport
          containerPort: 8080
        - name: jnlpport
          containerPort: 50000
        livenessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: "/login"
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pv-claim
[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created
[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 5m36s
[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME READY STATUS RESTARTS AGE
jenkins-7fdc8dd5fd-bg66q 1/1 Running 0 19s
# 7. Create the Service that publishes the Jenkins pod
[root@k8smaster kubernetes-jenkins]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32000
[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created
[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-service NodePort 10.104.76.252 <none> 8080:32000/TCP 24s
# 8. Access Jenkins from a Windows machine via a node IP plus the NodePort
http://192.168.2.104:32000/login?from=%2F
# 9. Exec into the pod to read the initial admin password
[root@k8smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q -n devops-tools -- bash
bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
b0232e2dad164f89ad2221e4c46b0d46
# Change the password
[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME READY STATUS RESTARTS AGE
jenkins-7fdc8dd5fd-5nn7m 1/1 Running 0 91s
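
With GitLab, Jenkins, and Harbor in place, the pipeline described in step IV (pull the code, build an image, push it to the registry, roll it out) can be wired up as a Jenkins job. The shell step below is only a sketch; the repository URL, image name, deployment name, and credential variable are assumptions, not values from the original article:

# "Execute shell" build step of a hypothetical Jenkins job
git clone http://myweb.first.com/root/myweb.git && cd myweb
docker build -t 192.168.2.106/library/myweb:$BUILD_NUMBER .
docker login 192.168.2.106 -u admin -p "$HARBOR_PASSWORD"   # credentials injected by Jenkins, placeholder name
docker push 192.168.2.106/library/myweb:$BUILD_NUMBER
# roll the new image out to the cluster (assumes kubectl and a kubeconfig on the build agent)
kubectl set image deployment/myweb myweb=192.168.2.106/library/myweb:$BUILD_NUMBER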
3. Deploy Harbor
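
The original post breaks off here, before the Harbor walkthrough. For completeness, Harbor 2.4.1 (the version listed in the project environment) is normally installed on the harbor host (192.168.2.106) from the offline installer once Docker and Docker Compose are present; the commands below sketch that standard procedure and are not the article's own steps, with the port and password values as placeholders.

# on the harbor server
wget https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz
tar xf harbor-offline-installer-v2.4.1.tgz && cd harbor
cp harbor.yml.tmpl harbor.yml
# edit harbor.yml: set hostname to 192.168.2.106, pick an http port,
# comment out the https section if no certificate is used, and set harbor_admin_password
./install.sh
# on every docker/k8s node, add "insecure-registries": ["192.168.2.106"] to /etc/docker/daemon.json,
# restart docker, then verify:
docker login 192.168.2.106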

   
   