1. Kubernetes High-Availability Deployment

There are two installation methods: kubeadm and binary installation. This article uses the first, kubeadm.

1.1 Basic Environment Configuration

Environment overview:

    master01~03    192.168.150.150~152    3 master nodes
    master-vip     192.168.150.200        keepalived virtual IP (VIP)
    node01~02      192.168.150.153~154    2 worker nodes

Configure hosts on all nodes by editing /etc/hosts as follows:

## Edit the hosts file
vim /etc/hosts
192.168.150.150 master01
192.168.150.151 master02
192.168.150.152 master03
192.168.150.200 master-vip
192.168.150.153 node01
192.168.150.154 node02

##	Set the hostname on each host
hostnamectl set-hostname master01				#and so on for the other nodes

Configure the yum repositories on CentOS 7 as follows.

Yum repository reference: https://blog.csdn.net/xiaojin21cen/article/details/84726193

##	Install prerequisite packages
yum install -y yum-utils device-mapper-persistent-data lvm2

## Configure the Aliyun base yum repository
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
 
##	Add the Docker yum repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo


cat > /etc/yum.repos.d/kubernetes.repo  <<EOF 
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
 
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Other environment configuration

##	Install required tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

##	Disable SELinux, the firewall, and swap
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

##	Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm 
yum install ntpdate -y

##	Synchronize the time on all nodes
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com

##	Add a crontab entry for periodic time synchronization
crontab -e
*/5 * * * * ntpdate time2.aliyun.com

cat /dev/null > /var/spool/mail/root
echo "unset MAILCHECK" >> /etc/profile

##	Configure resource limits on all nodes
ulimit -SHn 65535

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Operations on master01

Set up passwordless SSH from master01 to the other nodes; all configuration files and certificates generated during the installation are created on master01.

##	Generate an SSH key pair
ssh-keygen -t rsa

for i in master01 master02 master03 node01 node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

cd /root
git clone https://github.com/dotbalo/k8s-ha-install.git

##	Upgrade the system on all nodes and reboot. This does not upgrade the kernel; the kernel is upgraded separately in the next section:
yum update -y --exclude=kernel* && reboot

1.2 System and Kernel Configuration

Upgrade the kernel

##	Installation reference: https://www.cnblogs.com/jinyuanliu/p/10368780.html

##	The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm 
##	After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry; if not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt

##	Set the new kernel as the default boot entry
grub2-set-default 'CentOS Linux (5.4.113-1.el7.elrepo.x86_64) 7 (Core)'
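The exact menuentry title depends on the kernel-lt version that yum actually installed, so it may be safer to look it up before running grub2-set-default. A small sketch (the awk one-liner assumes the usual grub.cfg layout):

##	List the menuentry titles in grub.cfg and pick the new kernel
awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg

##	Confirm the saved default entry
grub2-editenv list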


Install ipvsadm and configure the kernel modules

yum install ipset ipvsadm sysstat conntrack libseccomp -y
##	Enable ipvs
##	Configure the ipvs modules on all nodes. On kernel 4.19+ nf_conntrack_ipv4 has been renamed to nf_conntrack; on 4.18 and below use nf_conntrack_ipv4
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack			#the kernel here is 4.19+

vim /etc/modules-load.d/ipvs.conf
#add the following content
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_lblcr
ip_vs_lblc
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
#on kernel 4.18 and below use nf_conntrack_ipv4 instead
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

##	Then run
systemctl enable --now systemd-modules-load.service

##	Verify that the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

Enable the kernel parameters required by a Kubernetes cluster; configure these on all nodes (copy and run the whole block):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
 
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

1.3 Basic Component Installation

Install Docker

##	Add parameters to /etc/sysctl.conf
cat << EOF >>/etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p

##	Install Docker on all nodes
yum -y install docker-ce-19.03.*

##	Tip:
##	Newer kubelet versions recommend systemd, so change Docker's CgroupDriver to systemd
mkdir /etc/docker 
cat << EOF >>/etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload && systemctl enable --now docker
docker info 				##check that there are no warnings; investigate and fix any that appear
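A quick sanity check that the cgroup driver change took effect; it should report systemd:

##	Show only the cgroup driver line from docker info
docker info | grep -i "cgroup driver"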


Install the Kubernetes components

##	List the installable Kubernetes versions
yum list kubeadm.x86_64 --showduplicates | sort -r

##	Install the specified version of the Kubernetes components on all nodes
yum install -y kubeadm-1.19.3-0.x86_64 kubectl-1.19.3-0.x86_64 kubelet-1.19.3-0.x86_64
#alternatively, install the latest kubeadm, which also pulls in dependencies such as kubectl and kubelet
#e.g.: yum install kubeadm -y

##	The default pause image comes from the gcr.io registry, which may be unreachable from China, so configure kubelet to use the Aliyun pause image instead:
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

##	Enable kubelet to start on boot
systemctl daemon-reload
systemctl enable --now kubelet

1.4 High-Availability Component Installation

Note: if this is not a high-availability cluster, HAProxy and keepalived do not need to be installed.

##	Install HAProxy and keepalived on all master nodes via yum
yum install keepalived haproxy -y

HAProxy configuration

##	Configure HAProxy on all master nodes (for details see the HAProxy documentation; the HAProxy configuration is identical on all master nodes)
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
vim /etc/haproxy/haproxy.cfg 

global
  log 127.0.0.1 local0 err
  ulimit-n 16384
  maxconn 2000
  stats timeout 30s

defaults
  mode                    http
  log                     global
  option                  httplog
  timeout http-request    15s
  timeout connect         5000
  timeout client          50000
  timeout server          50000
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01  192.168.150.150:6443  check
  server master02  192.168.150.151:6443  check
  server master03  192.168.150.152:6443  check
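Before starting the service, it may be worth validating the configuration syntax:

##	Check the HAProxy configuration for syntax errors
haproxy -c -f /etc/haproxy/haproxy.cfg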

Keepalived configuration

Configure keepalived on all master nodes. The configuration differs per node; pay attention to each node's IP address and network interface (the interface parameter).

**Master01** configuration

vim /etc/keepalived/keepalived.conf 

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state MASTER         #change to BACKUP on the backup servers
    interface ens33        #change to your own network interface
    mcast_src_ip 192.168.150.150
    virtual_router_id 51
    priority 100         #use a value lower than 100 on the backup servers, e.g. 90, 80
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass k8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.150.200          #the virtual IP (VIP), set your own
    }
    track_script {
       chk_apiserver
    }
}

**Master02** configuration

! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}

vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}

vrrp_instance VI_1 {
    state BACKUP         #BACKUP on the backup servers
    interface ens33        #change to your own network interface
    mcast_src_ip 192.168.150.151
    virtual_router_id 51
    priority 90         #use a value lower than 100 on the backup servers, e.g. 90, 80
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass k8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.150.200          #the virtual IP (VIP), set your own
    }
    track_script {
       chk_apiserver
    }
}

##	Master03 node configuration
##	Basically the same as master02: change mcast_src_ip to 192.168.150.152 and use a lower priority (e.g. 80)

Add the health check script

vim /etc/keepalived/check_apiserver.sh 

#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

##	Make the script executable, then start the services
chmod +x /etc/keepalived/check_apiserver.sh 			
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

###	Verification and testing
netstat -anpt					##check whether port 16443 is listening
ping 192.168.150.200				##check whether the VIP responds

##	If the VIP cannot be pinged and telnet to it does not connect, the VIP is not usable; do not continue, and troubleshoot keepalived first (for example the firewall, SELinux, the haproxy and keepalived services, and the listening ports). On all nodes the firewall must be disabled and inactive (systemctl status firewalld) and SELinux must be disabled (getenforce). On the master nodes check the haproxy and keepalived status (systemctl status keepalived haproxy) and the listening ports (netstat -lntp).
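For example, the checks above can be run as follows (telnet was included in the tool list installed earlier; 192.168.150.200 is the VIP configured above):

##	Basic VIP and HAProxy checks, run from any master node
ping -c 3 192.168.150.200
telnet 192.168.150.200 16443
systemctl status haproxy keepalived
netstat -lntp | grep 16443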

1.5 Cluster Initialization

Master node configuration

Create the initialization file kubeadm-config.yaml on all nodes.

##	Check the kubeadm version; it is written into the configuration file
kubeadm version	

##	Generate the default initialization file on all nodes
kubeadm config print init-defaults > kubeadm-config.yaml

##	Edit the initialization file on all nodes
vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef			#may differ; change as needed
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.150.150
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.150.200
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.150.200:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

##	If a future kubeadm release deprecates this configuration format, convert it with the following command to update the kubeadm file:
# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml


##	List the images that are required
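The listing command itself is not shown above; with the (un-converted) configuration file used here it would be:

kubeadm config images list --config /root/kubeadm-config.yaml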


##	Pull the images required for initialization first
##	kubeadm config images pull --config /root/new.yaml
kubeadm config images pull --config /root/kubeadm-config.yaml   ##the config file was not converted here

Cluster initialization

##	Run on the master01 node
kubeadm init --config kubeadm-config.yaml --upload-certs

##	Returned output

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

## run this on each master node after it joins, so that kubectl can be used
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
##
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
##	join command for master (control-plane) nodes
    kubeadm join 192.168.150.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1727eb22c3e8ce8ffd83d623f7fc176d40e65c25eb37dfef33ea963ecdf0866b \
    --control-plane --certificate-key ef7f9e217e6d9deee69028da6bde6ff42f3f4c786a4b644c7e210a2c7194afaf
##

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
##	join command for worker nodes
kubeadm join 192.168.150.200:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1727eb22c3e8ce8ffd83d623f7fc176d40e65c25eb37dfef33ea963ecdf0866b 

Handling token expiry

The token used to initialize the cluster is only valid for a limited time (and the uploaded certificates are deleted after two hours), so if more nodes need to join later, a new token must be generated.

##	for worker nodes: regenerate the join command
kubeadm token create --print-join-command			

##	for master nodes: the command above plus a freshly uploaded certificate key from
kubeadm init phase upload-certs --upload-certs	
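Putting the two together, a new master joins with the worker join command printed by the first command plus the --control-plane and --certificate-key flags, where the certificate key comes from the second command. A sketch with placeholder values (substitute your own):

##	Join a new master node
kubeadm join 192.168.150.200:16443 --token <new-token> \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash> \
    --control-plane --certificate-key <new-certificate-key>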

1.6 Network Configuration

Install the Calico network plugin (on master01)

##	Pull the calico manifest files
git clone https://github.com/dotbalo/k8s-ha-install.git			//clone
cd k8s-ha-install/
git checkout manual-installation-v1.20.x					//switch to this branch
cd calico/


##	Here the calico-etcd.yaml file was uploaded directly instead
[root@master01 ~]# ls calico-etcd.yaml 
calico-etcd.yaml

sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.150.150:2379,https://192.168.150.151:2379,https://192.168.150.152:2379"#g' calico-etcd.yaml 

ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml 

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
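Before applying the manifest, it may be worth confirming that the sed substitutions landed:

##	Verify the etcd endpoints, certificates, and pod CIDR were filled in
grep -E "etcd_endpoints|etcd-ca|CALICO_IPV4POOL_CIDR" calico-etcd.yaml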

##	Deploy the network
kubectl apply -f calico-etcd.yaml

##	Check whether calico has started
kubectl get pods -n kube-system -o wide	

[root@master01 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5f6d4b864b-jz5g6   1/1     Running   0          110s
kube-system   calico-node-9rmc6                          1/1     Running   0          110s
kube-system   calico-node-9w49p                          1/1     Running   0          110s
kube-system   calico-node-d2mlc                          1/1     Running   0          110s
kube-system   calico-node-p6qsl                          1/1     Running   0          110s
kube-system   calico-node-xb92b                          1/1     Running   0          110s
kube-system   coredns-6c76c8bb89-99xjn                   1/1     Running   0          42h
kube-system   coredns-6c76c8bb89-t5wx4                   1/1     Running   0          42h
kube-system   etcd-master01                              1/1     Running   1          42h
kube-system   etcd-master02                              1/1     Running   1          42h
kube-system   etcd-master03                              1/1     Running   7          42h
kube-system   kube-apiserver-master01                    1/1     Running   1          42h
kube-system   kube-apiserver-master02                    1/1     Running   1          42h
kube-system   kube-apiserver-master03                    1/1     Running   11         41h
kube-system   kube-controller-manager-master01           1/1     Running   2          42h
kube-system   kube-controller-manager-master02           1/1     Running   0          42h
kube-system   kube-controller-manager-master03           1/1     Running   0          41h
kube-system   kube-proxy-8sh5s                           1/1     Running   0          41h
kube-system   kube-proxy-dcqjn                           1/1     Running   0          42h
kube-system   kube-proxy-j8q4x                           1/1     Running   0          42h
kube-system   kube-proxy-jvwcx                           1/1     Running   0          41h
kube-system   kube-proxy-lgc7r                           1/1     Running   0          41h
kube-system   kube-scheduler-master01                    1/1     Running   2          42h
kube-system   kube-scheduler-master02                    1/1     Running   0          42h
kube-system   kube-scheduler-master03                    1/1     Running   0          41h
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   42h   v1.19.3
master02   Ready    master   42h   v1.19.3
master03   Ready    master   42h   v1.19.3
node01     Ready    <none>   41h   v1.19.3
node02     Ready    <none>   41h   v1.19.3

1.7 Install the Web Dashboards

Install Metrics Server, Dashboard, and Kuboard.

Install Metrics Server (on master01)

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

metrics-server GitHub repository: https://github.com/kubernetes-sigs/metrics-server

##	Copy /etc/kubernetes/pki/front-proxy-ca.crt from master01 to the other nodes
scp /etc/kubernetes/pki/front-proxy-ca.crt node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt node02:/etc/kubernetes/pki/front-proxy-ca.crt

##	Install metrics-server; the manifest can be downloaded from the official site and then uploaded here
##	Switch to the branch of the repository cloned earlier
[root@master01 ~]# ls k8s-ha-install/metrics-server-0.4.x-kubeadm/
comp.yaml					

##	Install metrics-server
kubectl create -f comp.yaml 

##	Check the status and confirm that metrics can be collected
[root@master01 metrics-server-0.4.x-kubeadm]# kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master01   1074m        53%    1186Mi          63%       
master02   1350m        67%    1236Mi          66%       
master03   1060m        53%    1168Mi          62%       
node01     467m         23%    897Mi           48%       
node02     441m         22%    836Mi           45%  

Install Dashboard

Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and run commands inside containers.

Manifest download URL: https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

##	Use the manifest files from the repository cloned earlier
[root@master01 ~]# cd k8s-ha-install/dashboard/ && ls
dashboard-user.yaml  dashboard.yaml

##	Install Dashboard
kubectl create -f .


##	Check the installation
[root@master01 dashboard]# kubectl get pods -A |grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-7645f69d8c-brm9p   1/1     Running   0 
kubernetes-dashboard   kubernetes-dashboard-78cb679857-q5kwm        1/1     Running   0 

##	Change the service type to NodePort
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

type: NodePort				#field to modify
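As an alternative to the interactive edit, the same change can be made with a one-line patch (a sketch, not part of the original steps):

##	Switch the dashboard service to NodePort non-interactively
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'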


##	Access test
https://192.168.150.200:32619/ 		##the port may differ; check it with kubectl get svc -A

##	A token is required to log in
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')




##	Switch kube-proxy to ipvs mode
##	Because the ipvs configuration was left commented out when the cluster was initialized, it has to be changed manually:
kubectl edit cm kube-proxy -n kube-system
mode: "ipvs"

Update kube-proxy
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system

Verify the kube-proxy mode
[root@master01 system]# curl 127.0.0.1:10249/proxyMode
ipvs[root@master01 system]# 
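The IPVS rules that kube-proxy programs can also be inspected directly with ipvsadm (installed earlier):

##	List the IPVS virtual servers and their backends
ipvsadm -ln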

Remove the master taint

When installing Kubernetes a taint was placed on the master nodes. In practice, to save resources, you may want the masters to be able to run Pods as well.

##	List the masters that carry the taint
kubectl describe node -l node-role.kubernetes.io/master |grep node-role.kubernetes.io/master:NoSchedule

##	Remove the taint
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
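To confirm the taint is gone, for example:

##	Taints should now show <none> on the master nodes
kubectl describe node master01 | grep Taints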



##	Cluster verification
##	Check whether cluster Pods can communicate with each other (ping)
##	List the cluster Pods
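One way to do this (a sketch; the pod name net-test, the busybox:1.28 image, and the target IP are assumptions, substitute any Pod IP from the listing):

##	Start a test Pod and check DNS and Pod-to-Pod connectivity
kubectl run net-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl get pods -A -o wide
kubectl exec net-test -- nslookup kubernetes.default
kubectl exec net-test -- ping -c 3 <another-pod-ip>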

Install Kuboard on master01

Kuboard is a microservice management UI for Kubernetes, intended to help users get microservices running on Kubernetes quickly.

Kuboard has a Chinese-language interface and a fairly complete feature set.

Official site: https://www.kuboard.cn/

Official installation docs: https://kuboard.cn/install/install-dashboard.html#%E5%85%BC%E5%AE%B9%E6%80%A7

##	Install Kuboard
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
##	metrics-server was already installed above
##	kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml

##	Check the Kuboard status
[root@master01 ~]# kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
kuboard-74c645f5df-7dzln   1/1     Running   0          2m34s

##	Check the service port
kubectl get service -A |grep kuboard

##	Log in with a master IP or the VIP plus the port
##	Get the login token by running this command on the first master node
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

2. Kubernetes Basics

Kubernetes is an open-source project from Google, descended from Borg and built on Google's fifteen years of production experience. It provides a platform for automated deployment, scaling, high availability, and running application containers across clusters of hosts.

2.1 Master Node

Components:

**API server:** exposes the cluster API; every Kubernetes component that accesses etcd must go through it

**Controller Manager:** manages Nodes, Pod replicas, service endpoints, namespaces, service accounts, and resource quotas, and maintains the cluster state

**Scheduler:** the scheduler; it decides, according to its scheduling algorithms, which node each Pod is placed on

**Etcd:** a key-value database that stores all of the cluster's network configuration and object state

2.2 Node (Worker)

Components:

**Kubelet:** each node runs a kubelet process that carries out the tasks assigned by the master and manages Pods and their containers

**Kube-proxy:** runs on every node; it watches the Services defined through the API and creates routing rules to load-balance traffic to them

2.3 Other Components

**Calico:** a CNI-compliant network plugin that gives every Pod a unique IP address and treats each node as a router (Cilium is another such plugin)

**CoreDNS:** resolves Services inside the Kubernetes cluster, letting Pods resolve a Service name to an IP address and connect to the corresponding application through the Service IP

**Docker engine:** responsible for creating and managing containers

2.4 The Pod Concept

A Pod is the most basic unit of operation in Kubernetes. A Pod can contain one or more closely related containers, and can be viewed by the containerized environment as the logical host of the application layer.

Pods are created, started, and destroyed on nodes. Every Pod runs a special Pause container; the other containers are called business containers. The business containers in the same Pod share the Pause container's IP address and mounted volumes.
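A small illustration of that sharing (a sketch; the Pod and container names are made up): the two containers below can talk to each other over localhost because they share one network namespace.

##	Two containers in one Pod share the Pause container's network namespace
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: busybox:1.28
    command: ["sleep", "3600"]
EOF

##	The sidecar can reach nginx on localhost because the network is shared
kubectl exec shared-net-demo -c sidecar -- wget -qO- http://127.0.0.1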

Main parts of a Pod:

metadata: the Pod's name, namespace, labels, etc.
spec: the Pod's container information, such as container name, image, volumes, etc.
status: the Pod's current state, such as container status and IP address
##	Use kubectl explain pods.<field> to view help for any field

Create a simple Pod

[root@master01 ~]# vim pod-my.yaml 

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

##	Create the Pod
kubectl create -f pod-my.yaml 

##	List Pods
kubectl get pods 

##	Output the full details in YAML format
kubectl get pod <pod-name> -o yaml

View application logs in a Pod


kubectl logs <pod-name>						##view the Pod's logs
kubectl logs <pod-name> -c <container-name>			  ##view the logs of one container in a multi-container Pod
kubectl port-forward <pod-name> 8999:80		 ##forward a local port to a port inside the Pod
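For example, with the my-nginx Pod created earlier (running port-forward in the background just for the test):

##	Forward local port 8999 to port 80 of the my-nginx Pod, then test it
kubectl port-forward my-nginx 8999:80 &
curl http://127.0.0.1:8999
kubectl logs my-nginx				##the curl request should show up in the nginx access log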