Table of Contents
- JumpServer installation and usage
- JumpServer: allow developer user tom to use docker on hosts; allow ops users to configure iptables and use docker
- Kubernetes component principles
- Kubernetes resource types explained
- Installing a highly available Kubernetes cluster and upgrading it
- livenessProbe and readinessProbe explained alongside the Pod creation flow
1. JumpServer installation and usage
Mind map:
Change the hostname
vim /etc/hostname
Jumpserver.example.com
Reboot to take effect. Rename host 102 to mysql.example.com
Deploy MySQL (omitted): 10.0.0.152
Configure Redis: 10.0.0.153
Remote telnet test
telnet 10.0.0.153 6379
Deploy JumpServer: 10.0.0.151
cd /home/yangk1/
ll
docker load -i jumpserver_v1.5.9_all-docker-image.tar.gz
Generate the bootstrap token
if [ "$BOOTSTRAP_TOKEN" = "" ]; then BOOTSTRAP_TOKEN=$(cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 16); echo "BOOTSTRAP_TOKEN=$BOOTSTRAP_TOKEN" >> ~/.bashrc; echo $BOOTSTRAP_TOKEN; else echo $BOOTSTRAP_TOKEN; fi
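The docker run command below also references $SECRET_KEY; it can be generated with the same pattern (a 50-character variant of this snippet appears later in these notes, reproduced here cleanly):

```shell
# Generate a 50-char SECRET_KEY once and persist it in ~/.bashrc,
# mirroring the BOOTSTRAP_TOKEN logic above.
if [ "$SECRET_KEY" = "" ]; then
  SECRET_KEY=$(cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 50)
  echo "SECRET_KEY=$SECRET_KEY" >> ~/.bashrc
fi
echo "$SECRET_KEY"
```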
Start the container
docker run -d --name jms_all \
-v /data/jumpserver:/opt/jumpserver/data/media \
-p 80:80 \
-p 2222:2222 \
-e SECRET_KEY=$SECRET_KEY \
-e BOOTSTRAP_TOKEN=$BOOTSTRAP_TOKEN \
-e DB_HOST=<database-host> \
-e DB_PORT=3306 \
-e DB_USER=jumpserver \
-e DB_PASSWORD="<database-password>" \
-e DB_NAME=jumpserver \
-e REDIS_HOST=<redis-host> \
-e REDIS_PORT=6379 \
-e REDIS_PASSWORD="123456" \
jumpserver/jms_all
Access test: 10.0.0.151
2. JumpServer: allow developer user tom to use docker on hosts; allow ops users to configure iptables and use docker
Log in with the admin account and password
Create a group first
root@10ubuntu-yangk:~# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
root@10ubuntu-yangk:~# systemctl start docker
root@10ubuntu-yangk:~# docker pull jumpserver/jms_all:latest
root@10ubuntu-yangk:~# docker images
Generate the secret key
if [ "$SECRET_KEY" = "" ]; then \
SECRET_KEY=`cat /dev/urandom | \
tr -dc A-Za-z0-9 | \
head -c 50`; \
echo "SECRET_KEY=$SECRET_KEY" >> ~/.bashrc; \
echo $SECRET_KEY; else echo $SECRET_KEY; \
fi
Generate the token
if [ "$BOOTSTRAP_TOKEN" = "" ]; then \
BOOTSTRAP_TOKEN=`cat /dev/urandom | \
tr -dc A-Za-z0-9 | \
head -c 16`; \
echo "BOOTSTRAP_TOKEN=$BOOTSTRAP_TOKEN" >> ~/.bashrc; \
echo $BOOTSTRAP_TOKEN; else echo $BOOTSTRAP_TOKEN; \
fi
Start JumpServer
docker run -it -d --name jumpserver_all \
-v /opt/jumpserver:/opt/jumpserver/data/media \
-p 80:80 \
-p 2222:2222 \
-e SECRET_KEY=ZIvo81YHqlxM42LmrSvcAqofQOeXiDfIRruaq9OTaPIE5SAGVE \
-e BOOTSTRAP_TOKEN=6AKDZriHLlWuSaGK \
-e REDIS_HOST=10.0.0.153 \
-e REDIS_PORT=6379 \
-e REDIS_PASSWORD= \
jumpserver/jms_all:latest
ip a|sed -n '9p'
Create a regular user:
The user can then create nodes:
Done
3. Kubernetes component principles
Pod: literally "pea pod", the smallest unit Kubernetes runs and schedules; it holds one or more containers, which simplifies scheduling and network configuration.
kube-scheduler: the Pod scheduler. It places Pods onto valid nodes, applies scheduling constraints, and reacts to topology changes.
kube-controller-manager: the cluster's internal management and control center. It manages nodes, Pod replicas, service endpoints, namespaces, service accounts, and resource quotas; when a node goes down, it automatically repairs the state to keep workloads running.
These control-plane components are indispensable; for high availability they must be installed on additional nodes as well.
Node components:
kube-proxy: the network proxy on each node; it configures proxy rules for the Services created through the apiserver API.
kubelet: the agent running on every worker node. It watches the Pods assigned to its node and their status, receives instructions to create containers, prepares the volumes a Pod needs, reports the Pod's running state, and performs health checks.
etcd: the cluster's data store, holding all cluster state. It supports distributed clustering and must be backed up in production.
Optional add-ons (used later in these notes): a network plugin such as flannel, CoreDNS, and the Dashboard.
4. Kubernetes resource types explained
Overview of k8s resource types
Deployment, Service, and Pod are the three most fundamental resource objects in Kubernetes.
Deployment: the most common controller for stateless applications; it supports scaling, rolling updates, and similar operations.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
Service: provides a fixed access endpoint for Pod objects, which are elastic and have a limited lifecycle; it is used for service discovery and service access.
Example:
kind: Service
apiVersion: v1
metadata:
  name: test-nginx-service
  labels:
    app: test-nginx-service-label
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx
Pod: the smallest unit of running containers and of scheduling. A single Pod can run several containers at once, and those containers share the NET, UTS, and IPC namespaces (USER, PID, and MOUNT can be shared as well).
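Because containers in a Pod share the network namespace, they reach each other over localhost. A minimal sketch of a two-container Pod (names and images here are illustrative, not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-ns-demo
spec:
  containers:
  - name: web
    image: nginx:1.18.0              # listens on port 80
  - name: sidecar
    image: alpine
    command: ["sh", "-c", "sleep 360000"]
    # inside this container, `wget -qO- 127.0.0.1` reaches nginx,
    # because both containers share the Pod's NET namespace
```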
5. Installing a highly available Kubernetes cluster and upgrading it
Prerequisites
Disable swap
Disable SELinux
Disable iptables
Tune kernel parameters and resource limits
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1  # packets forwarded by a Linux bridge are matched against the host's iptables FORWARD rules
Disable swap on the template machine
vim /etc/fstab
reboot
free -m
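The fstab edit can also be done non-interactively; a sketch using sed, demonstrated here on a throwaway copy (on the template machine run `swapoff -a` first and point the sed at /etc/fstab itself):

```shell
# Demo fstab with one swap entry
printf '/dev/sda1 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > /tmp/fstab.demo
# Comment out every line whose filesystem type is swap
sed -i '/\sswap\s/ s/^[^#]/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After the real edit, `free -m` should show 0 total swap once the machine reboots.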
Kernel parameters on the template machine (/etc/sysctl.conf):
root@10ubuntu-yangk:~# cat /etc/sysctl.conf
root@10ubuntu-yangk:~# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max=1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
Resource limits on the template machine:
root@10ubuntu-yangk:~# vim /etc/security/limits.conf
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
root soft core unlimited
root hard core unlimited
root soft nproc 1000000
root hard nproc 1000000
root soft nofile 1000000
root hard nofile 1000000
root soft memlock 32000
root hard memlock 32000
root soft msgqueue 8192000
root hard msgqueue 8192000
Reboot, take an image, and clone. The master count should be odd (1, 3, 5, 7); clone three machines.
Cluster layout:
master, three machines: 10.0.0.201/202/203
haproxy, two machines (load balancers shared between k8s and openstack; better kept separate): 10.0.0.204/205
harbor: 10.0.0.206
node, at least two: 10.0.0.207/208/209
After cloning the masters and changing their addresses, an error appears:
E303: Unable to open swap file for "/etc/netplan/01-netcfg.yaml", recovery impossible
Fix:
Remount the root filesystem read-write, and the file becomes editable:
mount -n -o remount,rw /
Set the hostnames of the three masters
root@10ubuntu-yangk:~# vim /etc/hostname
k8s-server1.example.com
k8s-server2.example.com
k8s-server3.example.com
HAProxy servers 204 and 205 (running one of the two is enough):
k8s-ha1.example.com
k8s-ha2.example.com
k8s-harbor.example.com
Install keepalived and haproxy
apt install keepalived haproxy -y
Locate the keepalived sample config to copy
find / -name keepalived.*
It is here: /usr/share/doc/keepalived/samples/keepalived.conf.vrrp
Copy the template into /etc:
root@k8s-ha1:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
A few small changes: mainly configuring a VIP
root@k8s-ha1:~# vim /etc/keepalived/keepalived.conf
Delete everything after the first vrrp_instance block: leave insert mode, then press dG
Change this line: 10.0.0.188 dev ens33 label ens33:1
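For reference, the resulting vrrp_instance ends up looking roughly like this (interface name, router id, and priority are assumptions; only the VIP line is from the notes):

```
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # assumed NIC name, matching the label above
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.188 dev ens33 label ens33:1
    }
}
```

On the backup node, state would be BACKUP with a lower priority.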
Restart keepalived
root@k8s-ha1:~# systemctl restart keepalived
root@k8s-ha1:~# systemctl status keepalived
Configure keepalived on the other machine the same way (omitted)
Configure haproxy on 204:
vim /etc/haproxy/haproxy.cfg
Forward requests on port 6443 to the masters: append a listen section at the end of the file
listen k8s-apiserver-6443
bind 10.0.0.188:6443
mode tcp
balance source
server 10.0.0.201 10.0.0.201:6443 check inter 3s fall 3 rise 5
# server 10.0.0.202 10.0.0.202:6443 check inter 3s fall 3 rise 5
# server 10.0.0.203 10.0.0.203:6443 check inter 3s fall 3 rise 5
Restart haproxy; 10.0.0.201 will be the first master of this environment
Enable both services at boot
root@k8s-ha1:/etc/keepalived# systemctl enable haproxy keepalived
Verification
Open the stats page on the VIP: http://10.0.0.188:9999/haproxy-status
The page comes from these lines in haproxy.cfg:
stats uri /haproxy-status
stats auth haadmin:q1w2e3r4ys
Username: haadmin, password: q1w2e3r4ys
Install Harbor on 206:
Install docker first (the offline bundle already includes the images); version 2.2.1 is used
docker-compose is also required
Install compose
Verification:
Browse to http://harbor.magedu.com/
Username admin, password 123456; the vulnerability scanner works as well
On every master and node machine, install a vetted docker version (not too new)
The remaining six machines:
root@k8s-server1:~# apt-get update
# Install the required system tools
apt -y install apt-transport-https ca-certificates curl software-properties-common
# Install the GPG key (verifies package integrity)
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the repository
add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Update the package index
apt-get -y update
# List installable Docker versions
apt-cache madison docker-ce docker-ce-cli
# Install and start docker 19.03.15:
apt install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic
systemctl start docker && systemctl enable docker
# Verify the docker version:
docker version
Check docker's flags for registering our own harbor:
dockerd --help | grep ins
--insecure-registry list    Enable insecure registry communication
Add our own image registry to docker.service
root@k8s-server1:~# vim /lib/systemd/system/docker.service
Append to the ExecStart line: --insecure-registry harbor.magedu.com
Add this on every node machine (and on the masters too); restart them all together in a moment
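An alternative to patching docker.service (a common approach, though not what these notes do) is to declare the registry in /etc/docker/daemon.json, which survives package upgrades:

```json
{
  "insecure-registries": ["harbor.magedu.com"]
}
```

Either way, docker must be restarted afterwards.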
Restart docker so the configuration takes effect
root@k8s-server2:~# systemctl daemon-reload && systemctl restart docker
Add the name resolution on every machine:
root@k8s-node3:~# vim /etc/hosts
10.0.0.206 harbor.magedu.com
Verification:
Error: "core services are not available"
Fix:
docker-compose down
docker-compose up -d
Restarting harbor cures it
Install the chosen k8s version: kubelet, kubeadm, and kubectl on all machines. kubeadm has to be in place before the cluster can be built.
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
Add the repository:
root@k8s-server1:~# cat << EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
Install 1.20.5-00; the upgrade to 1.20.6-00 is demonstrated later
# Install kubelet, kubeadm, kubectl
apt-get update
root@k8s-server1:~# apt-get install kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00 -y
Install on the node machines:
apt-get install kubelet=1.20.5-00 kubeadm=1.20.5-00
Install kubectl everywhere as well; by default it is unusable until a kubeconfig with the right certificate is in place
Setting up kubeadm (1.20.5-00) ...
root@k8s-server1:~# kubeadm version
Note: the install prints a version number at the end; remember it, the master configuration file needs it later.
# Enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
Build the highly available k8s cluster with kubeadm
First write a script to pull the required component images in advance
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
A script makes this repeatable:
root@k8s-server1:~# vim image-down.sh
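A sketch of what image-down.sh can look like, built from the pull list above; the tag is parameterized so the same script can be reused for the 1.20.6 upgrade later:

```shell
# Write the pull script; VER defaults to v1.20.5 but can be overridden,
# e.g. `bash image-down.sh v1.20.6` for the upgrade.
cat > image-down.sh <<'EOF'
#!/bin/bash
VER=${1:-v1.20.5}
REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:$VER kube-controller-manager:$VER \
           kube-scheduler:$VER kube-proxy:$VER \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  docker pull "$REPO/$img"
done
EOF
bash -n image-down.sh && echo "syntax OK"   # check the script parses
```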
Initialize the first HA master:
kubeadm init --apiserver-advertise-address=10.0.0.201 --control-plane-endpoint=10.0.0.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=magedu.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
Success; the output also includes these instructions:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
Record the token information
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 10.0.0.188:6443 --token zqztzs.kal38czwczaf94fs \
--discovery-token-ca-cert-hash sha256:9797421148aecff91e5bda95369322724930fda50084c4180bca27de18bfe399 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.188:6443 --token zqztzs.kal38czwczaf94fs \
--discovery-token-ca-cert-hash sha256:9797421148aecff91e5bda95369322724930fda50084c4180bca27de18bfe399
Install the network add-on
root@k8s-server1:~# mkdir m43
root@k8s-server1:~# cd m43/
root@k8s-server1:~/m43# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Change the image addresses in kube-flannel.yml to a reachable mirror:
![screenshot](https://img-blog.csdnimg.cn/b93c68915e0e454e8aea26e53406c782.png)
Add a new master node
kubeadm init phase upload-certs --upload-certs
Joining as a master requires the certificate key just uploaded:
3c539228ba89f87bac5156bef00576e31f4270338187304272d68884cd5f5c7c
Run the following on the additional master nodes:
kubeadm join 10.0.0.188:6443 --token zqztzs.kal38czwczaf94fs \
--discovery-token-ca-cert-hash sha256:9797421148aecff91e5bda95369322724930fda50084c4180bca27de18bfe399 \
--control-plane --certificate-key 3c539228ba89f87bac5156bef00576e31f4270338187304272d68884cd5f5c7c
Configure the worker nodes: copy the config file into the .kube directory on each node:
root@k8s-server1:~# ls -al
root@k8s-server1:~# cd .kube/
root@k8s-server1:~/.kube# ls
cache config
root@k8s-server1:~/.kube#
Add the node machines
kubeadm join 10.0.0.188:6443 --token rw26wb.ftdcs9hzm6672db1 \
--discovery-token-ca-cert-hash sha256:876dceb63104f99b5a384e2d0ccfe4e9d2583a88909f688c4716f3a9907450f4
Verify the certificate status
root@k8s-server1:~/m43# kubectl get csr
All Ready
Create containers to test connectivity
root@k8s-server1:~/.kube# kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
root@k8s-server1:~/.kube# kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
root@k8s-server1:~/.kube# kubectl get pod -A
Deploy the Dashboard
root@k8s-server1:~# cd m43/
root@k8s-server1:~/m43# ls
root@k8s-server1:~/m43# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Success
Verifying high availability of the k8s cluster:
Deploy nginx
root@k8s-server1:~/m43# cat nginx-m43.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-nginx-service
  labels:
    app: test-nginx-service-label
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx
Access test:
From 201: http://10.0.0.201:30004/
http://10.0.0.208:30004/ and http://10.0.0.207:30004/
Done
Deploy tomcat
root@k8s-server1:~/m43# vim tomcat-m43.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: test-tomcat-service
  labels:
    app: test-tomcat-service-label
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: tomcat
Enter the container (kubectl exec -it <pod-name> -- bash) and create a project:
root@tomcat-deployment-6c44f58b47-q9b8q:/usr/local/tomcat# cd webapps
root@tomcat-deployment-6c44f58b47-q9b8q:/usr/local/tomcat/webapps# mkdir m43
root@tomcat-deployment-6c44f58b47-q9b8q:/usr/local/tomcat/webapps# cd m43/
root@tomcat-deployment-6c44f58b47-q9b8q:/usr/local/tomcat/webapps/m43# echo "tomcat jsp web page" > index.jsp
Access test:
http://10.0.0.208:30005/m43/
Splitting static and dynamic requests: configure an nginx location
apt update
apt install vim -y
root@nginx-deployment-67dfd6c8f9-6sscz:/# vim /etc/nginx/conf.d/default.conf
Any request for /m43 is forwarded to tomcat:
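A sketch of the location block (the upstream is the in-cluster DNS name of the test-tomcat-service defined above; with the service-dns-domain used at init it resolves as shown, though the bare service name also works inside the same namespace):

```
location /m43 {
    proxy_pass http://test-tomcat-service.default.svc.magedu.local;
}
```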
nginx -s reload
Access test:
http://10.0.0.208:30004/m43/index.jsp
Works: requests to nginx under /m43 are handed off to tomcat
Upgrading k8s
root@k8s-server1:~/m43# kubeadm upgrade plan
Adjust the download script for the new version:
vim image-down.sh
root@k8s-server1:~/m43# vim image-down.sh
root@k8s-server1:~/m43# bash image-down.sh
Copy the images to the other masters:
root@k8s-server1:~/m43# kubeadm upgrade apply v1.20.6
Upgrade succeeded
Upgrade the node machines
apt install kubelet=1.20.6-00 kubeadm=1.20.6-00 kubectl=1.20.6-00 -y
Run the same install on every node machine.
root@k8s-server1:~/m43# kubectl get node
6. livenessProbe and readinessProbe explained alongside the Pod creation flow
livenessProbe
# Liveness probe: checks whether the container is still running. If the liveness probe fails, the kubelet kills the container, and the container is then subject to its restart policy. If the container defines no liveness probe, the state defaults to Success. livenessProbe controls whether the Pod is restarted.
The probe hits the URL every 5s; once it stops responding, the container is restarted:
readinessProbe
# Readiness probe: if the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of every Service matching the Pod. Before the initial delay, the readiness state defaults to Failure; if the container defines no readiness probe, the state defaults to Success. readinessProbe controls whether the Pod is added to a Service.
readinessProbe:
It does not restart the container:
Differences and relationship
livenessProbe:
1. Tells the kubelet whether to restart the Pod's container after running the probe
2. Whatever the probe result, the Pod stays in the Service's endpoints
readinessProbe:
1. Does not restart the Pod even when the probe fails
2. On failure, the Pod's address is removed from the Service's endpoints
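Putting both together, a sketch of the two probes on the nginx container used earlier (path and timing values are illustrative):

```yaml
spec:
  containers:
  - name: nginx
    image: nginx:1.18.0
    livenessProbe:            # failure => kubelet restarts the container
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5        # probe the URL every 5s
    readinessProbe:           # failure => Pod removed from Service endpoints
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```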
If you found this useful, please leave a like ♪(^∀^●) ↓↓↓