k8s Cluster Setup

I. Virtual Machine Setup

1. VM environment requirements
  • OS: CentOS 7.4
  • Memory: 2 GB (minimum)
  • CPU: 2 cores (minimum)
  • Disk: 20 GB (minimum)
  • Bridged networking
2. Change the hostname

If you don't change the hostname, cluster initialization may report errors saying the host cannot be found or connected to. These errors don't block the subsequent steps, but setting proper hostnames avoids them.

1. Check the current hostname

Use hostnamectl or hostnamectl status to view host information:

[root@bogon]# hostnamectl status 
   Static hostname: localhost.localdomain
Transient hostname: bogon
         Icon name: computer-vm
           Chassis: vm
        Machine ID: xxxxxxxxxx
           Boot ID: xxxxxxxxxx
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64
2. Set the hostname

Use hostnamectl set-hostname to set the hostname:

[root@bogon]# hostnamectl set-hostname master
[root@bogon]# hostnamectl status 
   Static hostname: master
         Icon name: computer-vm
           Chassis: vm
        Machine ID: xxxxxxxxxxxxx
           Boot ID: xxxxxxxxxxx
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64
[root@bogon]# 
  • Comparing the two outputs: after hostnamectl set-hostname master, the Transient hostname entry has been removed.

  • Note that although the hostname is now master, the shell prompt still reads root@bogon; log out and back in and the new hostname will appear. For a multi-node cluster, also see the sketch below.
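For a multi-node cluster, every machine needs a distinct hostname, and the names should be mutually resolvable. A minimal sketch, assuming one master and one worker with the IPs used later in this guide (adjust both to your environment):

# on the master
hostnamectl set-hostname master
# on the worker
hostnamectl set-hostname node1
# on every machine: map the names to IPs
cat >> /etc/hosts <<EOF
192.168.10.149 master
192.168.10.134 node1
EOF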

II. System Configuration

Since I used the minimal install, I need to set up bash command completion and install wget and vim as root:

yum -y install bash-completion
yum -y install vim
yum -y install wget
1. Disable the firewall
[root@master ~]# systemctl stop firewalld.service
[root@master ~]# systemctl disable firewalld.service # prevent start on boot
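A quick verification that firewalld is stopped and won't come back at boot:

[root@master ~]# systemctl is-active firewalld
inactive
[root@master ~]# systemctl is-enabled firewalld
disabled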
2. Install required components

Configure the yum repos, then install Docker and the Kubernetes components:
kubelet-1.14.2, kubeadm-1.14.2, kubectl-1.14.2, docker-ce

On the master node:

[root@master ~]# cd /etc/yum.repos.d
[root@master yum.repos.d]# yum install -y wget
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master yum.repos.d]# cat > kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF
[root@master yum.repos.d]# yum repolist
[root@master yum.repos.d]# yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 docker-ce-18.09.1 docker-ce-cli-18.09.1

On the node(s):
kubelet-1.14.2, kubeadm-1.14.2, docker-ce

[root@node1 ~]# cd /etc/yum.repos.d
[root@node1 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node1 yum.repos.d]# vi kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
[root@node1 yum.repos.d]# yum repolist
[root@node1 yum.repos.d]# yum install -y kubelet-1.14.2 kubeadm-1.14.2 docker-ce-18.09.1 docker-ce-cli-18.09.1
3. Disable swap

As with an Elasticsearch cluster, Linux swap must be disabled when installing Kubernetes; otherwise memory swapping degrades both performance and stability. We can prepare for this up front. Because my machines are short on memory, I keep swap enabled and make kubelet tolerate it instead: after installing kubelet, configure it as follows.

[root@master ~]# rpm -ql kubelet 
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
[root@master ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
[root@master ~]# systemctl enable kubelet # start on boot

How to disable swap

  • Run swapoff -a to disable swap temporarily; it returns after a reboot.
  • For a permanent fix, edit /etc/fstab and comment out the line containing swap, as shown below (a combined one-shot sketch follows this subsection):
/dev/mapper/centos_bogon-root /                       xfs     defaults        0 0
UUID=1729f47b-c14f-46d4-af71-5331fa5bb57f /boot                   xfs     defaults        0 0
#/dev/mapper/centos_bogon-swap swap                    swap    defaults        0 0

Alternatively, you can discourage swapping via vm.swappiness (note this does not fully disable swap):

echo "vm.swappiness = 0">> /etc/sysctl.conf 
sysctl -p
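A combined one-shot sketch: turn swap off now and keep it off after reboot (the sed comments out every fstab line mentioning swap, so review the file afterwards):

swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab # comment out swap entries
free -m # the Swap row should now show 0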
4. Configure bridge forwarding
[root@master ~]# cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

A possible error when applying the settings:

[root@master]# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory

Fix 1: load the br_netfilter module:

[root@master ~]# modprobe br_netfilter
[root@master ~]# ls -la /proc/sys/net/bridge
dr-xr-xr-x 1 root root 0 May 26 17:35 .
dr-xr-xr-x 1 root root 0 May 26 17:35 ..
-rw-r--r-- 1 root root 0 May 27 10:17 bridge-nf-call-arptables
-rw-r--r-- 1 root root 0 May 26 17:35 bridge-nf-call-ip6tables
-rw-r--r-- 1 root root 0 May 26 17:35 bridge-nf-call-iptables
-rw-r--r-- 1 root root 0 May 27 10:17 bridge-nf-filter-pppoe-tagged
-rw-r--r-- 1 root root 0 May 27 10:17 bridge-nf-filter-vlan-tagged
-rw-r--r-- 1 root root 0 May 27 10:17 bridge-nf-pass-vlan-input-dev

Fix 2: write 1 into the bridge-nf-call-ip6tables and bridge-nf-call-iptables files directly:

[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
[root@master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
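Both fixes last only until the next reboot. A small sketch to make them persistent, using systemd's modules-load mechanism (standard on CentOS 7):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf # load the module at boot
sysctl -p # re-apply the net.bridge.* keys appended to /etc/sysctl.conf above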
5. Disable SELinux
[root@master ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
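The config edit takes effect only after a reboot; to turn SELinux off for the current session too:

[root@master ~]# setenforce 0
[root@master ~]# getenforce
Permissive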

III. Configure Docker (master & node)

1. Start Docker
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker # start on boot
2. Check the Docker cgroup driver
[root@master ~]# docker info
...
Cgroup Driver: cgroupfs 
...

Docker's default cgroup driver is cgroupfs, so when we initialize with kubeadm the following warning appears:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
  • The cause: Docker's cgroup driver and kubelet's cgroup driver don't match.
  • There are two ways to fix it: change Docker's driver, or change kubelet's.

Fix 1: change Docker's cgroup driver

Edit the /etc/docker/daemon.json file:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Restart Docker:
systemctl daemon-reload
systemctl restart docker

Fix 2: change kubelet's cgroup driver

Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and add --cgroup-driver=cgroupfs:

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"
Restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
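Whichever fix you apply, confirm the two drivers now agree. A quick check via Docker's info template (expect systemd if you changed Docker, cgroupfs if you changed kubelet):

[root@master ~]# docker info --format '{{.CgroupDriver}}'
systemd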
3. Pull the required images

Images needed on the master node (master.sh):

#!/bin/bash
K8S_VERSION=v1.14.2
ETCD_VERSION=3.3.10
DNS_VERSION=1.3.1
PAUSE_VERSION=3.1
FLANNEL_VERSION=v0.11.0-amd64
# core components
docker pull ydtong/kube-apiserver:$K8S_VERSION
docker pull ydtong/kube-controller-manager:$K8S_VERSION
docker pull ydtong/kube-scheduler:$K8S_VERSION
docker pull ydtong/kube-proxy:$K8S_VERSION
docker pull ydtong/etcd:$ETCD_VERSION
docker pull ydtong/pause:$PAUSE_VERSION
docker pull ydtong/coredns:$DNS_VERSION
docker pull ydtong/flannel:$FLANNEL_VERSION
# retag to the names kubeadm and flannel expect
docker tag ydtong/kube-apiserver:$K8S_VERSION k8s.gcr.io/kube-apiserver:$K8S_VERSION
docker tag ydtong/kube-controller-manager:$K8S_VERSION k8s.gcr.io/kube-controller-manager:$K8S_VERSION
docker tag ydtong/kube-scheduler:$K8S_VERSION k8s.gcr.io/kube-scheduler:$K8S_VERSION
docker tag ydtong/kube-proxy:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag ydtong/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
docker tag ydtong/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag ydtong/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION
docker tag ydtong/flannel:$FLANNEL_VERSION quay.io/coreos/flannel:$FLANNEL_VERSION
# remove the mirror tags
docker rmi ydtong/kube-apiserver:$K8S_VERSION
docker rmi ydtong/kube-controller-manager:$K8S_VERSION
docker rmi ydtong/kube-scheduler:$K8S_VERSION
docker rmi ydtong/kube-proxy:$K8S_VERSION
docker rmi ydtong/etcd:$ETCD_VERSION
docker rmi ydtong/pause:$PAUSE_VERSION
docker rmi ydtong/coredns:$DNS_VERSION
docker rmi ydtong/flannel:$FLANNEL_VERSION
Images needed on the node (node.sh):

#!/bin/bash
K8S_VERSION=v1.14.2
DNS_VERSION=1.3.1
PAUSE_VERSION=3.1
FLANNEL_VERSION=v0.11.0-amd64
# core components
docker pull ydtong/kube-proxy:$K8S_VERSION
docker pull ydtong/pause:$PAUSE_VERSION
docker pull ydtong/coredns:$DNS_VERSION
docker pull ydtong/flannel:$FLANNEL_VERSION
# retag to the names kubeadm and flannel expect
docker tag ydtong/kube-proxy:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag ydtong/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag ydtong/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION
docker tag ydtong/flannel:$FLANNEL_VERSION quay.io/coreos/flannel:$FLANNEL_VERSION
# remove the mirror tags
docker rmi ydtong/kube-proxy:$K8S_VERSION
docker rmi ydtong/pause:$PAUSE_VERSION
docker rmi ydtong/coredns:$DNS_VERSION
docker rmi ydtong/flannel:$FLANNEL_VERSION
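After either script finishes, a quick sanity check that the local images now carry the tags kubeadm and flannel will look for:

docker images | grep -E 'k8s.gcr.io|quay.io/coreos/flannel'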

IV. Initialize the Cluster with kubeadm

1. Initialize the master node: kubeadm init
kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.10.149 --ignore-preflight-errors=Swap

What each option means:

  • Because we chose flannel as the Pod network plugin, the command specifies --pod-network-cidr=10.244.0.0/16; this is flannel's default subnet, and the Pod CIDR must be declared up front.
  • The --kubernetes-version=v1.14.2 option pins the Kubernetes version. It must match the Docker image version we imported (v1.14.2); otherwise kubeadm will reach out to Google and try to download the latest images.
  • Next is the apiserver address: --apiserver-advertise-address is the IP the API server binds to, i.e. our master node's IP address.
  • If you get an error like running with swap on is not supported. Please disable swap, add --ignore-preflight-errors=Swap to suppress the swap check.
  • If kubeadm init fails or is force-terminated, run kubeadm reset before running it again. You can also pre-check the required images, as in the sketch below.
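Before running init, you can list the images this kubeadm version expects and compare against docker images; the output sketched here is inferred from the versions used above:

[root@master ~]# kubeadm config images list --kubernetes-version v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1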
[root@master ~]# kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.10.149 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.10.48 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.10.48 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.48]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.006953 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e7c4y8.49l41qmlyxqpzebm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.48:6443 --token e7c4y8.49l41qmlyxqpzebm \
    --discovery-token-ca-cert-hash sha256:78b86fcae0bb7101365495507c38f260b2729f012b17e6f577272164bc3781bf  

As you can see, the cluster initialized successfully, and we now need to run the following commands:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# ss -tln | grep 6443 # check that port 6443 is listening
LISTEN     0      128         :::6443                    :::* 

Be sure to save the join command printed above. If you forgot to record the token and sha256 value, don't worry: you can look them up with the commands below.

[root@master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
379ke4.yrdsbe9tkljxw3y5   5h        2019-12-07T15:18:18+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

To get the sha256 of the CA certificate, read the certificate with openssl:

[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
832123aacaddabc465ce4ed33e06583c5fb4a97f137520c2f1f90583f2068e90

Or simply use the following command:

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.10.152:6443 --token ckeygk.of4zffcvirf5ezex     --discovery-token-ca-cert-hash sha256:832123aacaddabc465ce4ed33e06583c5fb4a97f137520c2f1f90583f2068e90 

If you hit other problems during installation, you can reset everything with:

$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
2. Join the node(s) to the cluster

Since I did not disable swap, I append --ignore-preflight-errors=Swap to ignore the swap warning:

kubeadm join 192.168.10.152:6443 --token ckeygk.of4zffcvirf5ezex     --discovery-token-ca-cert-hash sha256:832123aacaddabc465ce4ed33e06583c5fb4a97f137520c2f1f90583f2068e90 --ignore-preflight-errors=Swap
  • If you see an error like [discovery] Failed to request cluster info, will try again: [Get https://10.151.30.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid], the clocks on master and node are most likely out of sync; run an ntp time sync, then init and join again.

  • If join fails with [ERROR CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: fork/exec /usr/bin/crictl -r /var/run/dockershim.sock info: no such file or directory, the cri-tools version is the culprit; simply remove it: yum remove cri-tools.

After the node has joined the cluster, copy the master's ~/.kube/config to the corresponding location on the node and the kubectl command-line tool will work there as well:

[root@master ~]# ssh root@192.168.10.134 mkdir -p /root/.kube
[root@master ~]# scp -P 22 ~/.kube/config root@192.168.10.134:/root/.kube/config
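Then, on the node, a quick check that kubectl can reach the API server:

[root@node1 ~]# kubectl get nodes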
3. Check node status
[root@master ~]#  kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   81s   v1.14.2
node1    NotReady   <none>   36s   v1.14.2
4. Install the Pod network

At this point the nodes are NotReady; the flannel add-on still needs to be deployed:

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
  • Also note that if your nodes have more than one network interface, you need to use the --iface argument in kube-flannel.yml to name the cluster's internal NIC; otherwise DNS resolution may fail. Add --iface=<interface> to the flanneld startup args (full edit flow in the sketch after this block):
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth0
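A sketch of the full edit-and-apply flow, assuming the internal NIC is named eth0 (substitute your own, e.g. ens33):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# add --iface=eth0 to the flanneld container args in the manifest, as shown above
kubectl apply -f kube-flannel.yml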
Check the node status again; once both show Ready you're done:
[root@master ~]#  kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   92s   v1.14.2
node1    Ready    <none>   47s   v1.14.2
List the pods in all namespaces; they should all be in the Running state:
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-nwjjs          1/1     Running   1          2d18h
kube-system   coredns-fb8b8dccf-vjzjt          1/1     Running   1          2d18h
kube-system   etcd-master                      1/1     Running   1          2d18h
kube-system   kube-apiserver-master            1/1     Running   1          2d18h
kube-system   kube-controller-manager-master   1/1     Running   1          2d18h
kube-system   kube-flannel-ds-amd64-4p7df      1/1     Running   1          2d18h
kube-system   kube-flannel-ds-amd64-zzzvl      1/1     Running   2          2d18h
kube-system   kube-proxy-26mhw                 1/1     Running   1          2d18h
kube-system   kube-proxy-qg9pg                 1/1     Running   1          2d18h
kube-system   kube-scheduler-master            1/1     Running   1          2d18h
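
As a final smoke test, you can run a throwaway workload and confirm it gets scheduled and reaches Running (the deployment name here is arbitrary):

[root@master ~]# kubectl create deployment nginx --image=nginx
[root@master ~]# kubectl get pods -o wide
[root@master ~]# kubectl delete deployment nginx # clean up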
