Preface
I hadn't actually planned to write this post, since there are already plenty of articles online about deploying a one-master/two-node Kubernetes cluster. But just yesterday, moving from home back to school changed the DHCP-assigned IP of my bridged network, and the cluster would no longer come up. So after some back and forth I decided to write my own walkthrough of deploying Kubernetes 1.13.3 with one master and two nodes, putting down what little I've learned in the hope that it helps fellow travelers like me.
Host planning
Hostname | Internal IP | OS version | Memory, CPUs | Software | Pod CIDR | Service CIDR |
master | 10.0.0.100 | CentOS 7.3 | 4G, 2 cores | docker, kubeadm, kubectl, kubelet | 10.244.0.0/16 | 10.96.0.0/12 |
node01 | 10.0.0.101 | CentOS 7.3 | 2G, 2 cores | docker, kubeadm, kubelet | 10.244.0.0/16 | 10.96.0.0/12 |
node02 | 10.0.0.102 | CentOS 7.3 | 2G, 2 cores | docker, kubeadm, kubelet | 10.244.0.0/16 | 10.96.0.0/12 |
Note: turn off the firewall and SELinux, and synchronize the time on all nodes (see the sketch after these notes).
Note: it's best to run the VMs in NAT mode; with bridged mode, simply moving to another network can break the cluster.
Note: when you reach the network-plugin step later, first confirm that docker can actually pull the image; if it can't, it's a network problem, and tethering to a phone hotspot is the easiest way to get the flannel plugin installed.
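For reference, here is a minimal sketch of that preparation, run on every node (assuming a stock CentOS 7 with firewalld, and chrony for time sync):

# stop the firewall now and keep it off across reboots
systemctl stop firewalld
systemctl disable firewalld
# turn SELinux off now and on future boots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# time sync; any NTP client will do, chrony is just one option
yum install -y chrony
systemctl start chronyd && systemctl enable chronyd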
Deployment
Configure the yum repos
Add the Docker repo
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Add the Kubernetes repo
[root@master yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
Check the available repos
[root@master yum.repos.d]# yum repolist
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 3.4 kB 00:00:00
kubernetes/signature | 454 B 00:00:00
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Importing GPG key 0xA7317B0F:
 Userid     : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
 Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
 From       : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Is this ok [y/N]: y
Retrieving key from https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
kubernetes/signature | 1.4 kB 00:00:03 !!!
updates | 3.4 kB 00:00:00
(1/7): extras/7/x86_64/primary_db | 180 kB 00:00:00
(2/7): docker-ce-stable/x86_64/primary_db | 23 kB 00:00:00
(3/7): updates/7/x86_64/primary_db | 2.4 MB 00:00:01
(4/7): epel/x86_64/updateinfo | 959 kB 00:00:01
(5/7): docker-ce-stable/x86_64/updateinfo | 55 B 00:00:02
(6/7): epel/x86_64/primary_db | 6.6 MB 00:00:02
(7/7): kubernetes/primary | 44 kB 00:00:01
Determining fastest mirrors
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
kubernetes 314/314
repo id                        repo name                                          status
base/7/x86_64 CentOS-7 - Base - mirrors.aliyun.com 10,019
docker-ce-stable/x86_64 Docker CE Stable - x86_64 33
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 12,902
extras/7/x86_64 CentOS-7 - Extras - mirrors.aliyun.com 371
kubernetes Kubernetes 314
updates/7/x86_64 CentOS-7 - Updates - mirrors.aliyun.com 1,098
repolist: 24,737
Hostname resolution
[root@master yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.100 master
10.0.0.101 node01
10.0.0.102 node02
[root@master yum.repos.d]# scp /etc/hosts 10.0.0.101:/etc/
hosts                                              100%  213     0.2KB/s   00:00
[root@master yum.repos.d]# scp /etc/hosts 10.0.0.102:/etc/
Copy the repo files to node01 and node02
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node01:/etc/yum.repos.d/
                                                   100% 2640     2.6KB/s   00:00
[root@master yum.repos.d]# scp kubernetes.repo docker-ce.repo node02:/etc/yum.repos.d/
Install docker, kubelet, kubeadm, and kubectl (kubectl being the most commonly used Kubernetes command-line client)
[root@master yum.repos.d]# yum install -y docker-ce kubelet kubeadm kubectl
Dependencies Resolved
=============================================================================================================================
 Package                        Arch          Version                  Repository                 Size
=============================================================================================================================
Installing:
docker-ce x86_64 3:18.09.2-3.el7 docker-ce-stable 19 M
kubeadm x86_64 1.13.3-0 kubernetes 7.9 M
kubectl x86_64 1.13.3-0 kubernetes 8.5 M
kubelet x86_64 1.13.3-0 kubernetes 21 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-4.el7 base 186 k
containerd.io x86_64 1.2.2-3.3.el7 docker-ce-stable 22 M
cri-tools x86_64 1.12.0-0 kubernetes 4.2 M
docker-ce-cli x86_64 1:18.09.2-3.el7 docker-ce-stable 14 M
kubernetes-cni x86_64 0.6.0-0 kubernetes 8.6 M
libnetfilter_cthelper x86_64 1.0.0-9.el7 base 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.el7 base 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 base 23 k
socat x86_64 1.7.3.2-2.el7 base 290 k
Updating for dependencies:
libnetfilter_conntrack x86_64 1.0.6-1.el7_3 base 55 k
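One caveat: 1.13.3 simply happened to be the newest version in the repo when I installed. If the repo has since moved on, the 1.13.3 packages can be pinned explicitly with standard yum version syntax, e.g.:

yum install -y docker-ce kubelet-1.13.3-0 kubeadm-1.13.3-0 kubectl-1.13.3-0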
Start docker, set it to start on boot, and add a registry mirror
[root@master yum.repos.d]# systemctl start docker
[root@master yum.repos.d]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master yum.repos.d]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
[root@master yum.repos.d]# systemctl restart docker
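To confirm the mirror took effect, check docker info (the grep is just a convenient filter):

docker info | grep -A1 'Registry Mirrors'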
Disable swap
[root@master yum.repos.d]# vim /etc/sysconfig/kubelet
[root@master yum.repos.d]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@master yum.repos.d]# swapoff -a
[root@master yum.repos.d]# vim /etc/fstab
[root@master yum.repos.d]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Dec 27 10:50:57 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=553f6a3b-33e6-4add-89d6-8f1c22ece937 / xfs defaults 0 0
UUID=df05a2d6-6345-4c1c-b6d7-e9f019488df4 /boot xfs defaults 0 0
#UUID=dfb503e0-15ee-4a75-90f5-488dff604d76 swap swap defaults 0 0
[root@master yum.repos.d]# free -m
total used free shared buff/cache available
Mem: 3935 152 2984 8 799 3503
Swap: 0 0 0
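Editing /etc/fstab by hand works fine; if you would rather script it, a sed one-liner like this comments out the swap entry (a sketch; consider backing up fstab first):

# comment out any active swap line so swap stays off after reboot
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab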
Initialize the cluster with kubeadm
[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
The output above shows two problems, and the messages are very explicit; fix each one accordingly.
[root@master yum.repos.d]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@master yum.repos.d]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
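Note that this echo only lasts until the next reboot. To make the setting persistent, drop it into sysctl configuration (a sketch; the file name k8s.conf is arbitrary):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system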
[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.6: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Note: the long string of errors above all boils down to image pulls failing, which was entirely expected (for reasons that cannot be described). Not a big deal; just pull the images by hand!
Pull the images
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/pause:3.1
[root@master yum.repos.d]# docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
[root@master yum.repos.d]# docker pull coredns/coredns:1.2.6
Retag the images
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@master yum.repos.d]# docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
[root@master yum.repos.d]# docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
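The pull-and-retag steps can also be scripted; here is a sketch assuming the same mirrorgooglecontainers images and tags as above:

images=(
  kube-apiserver-amd64:v1.13.3
  kube-controller-manager-amd64:v1.13.3
  kube-scheduler-amd64:v1.13.3
  kube-proxy-amd64:v1.13.3
  pause:3.1
  etcd-amd64:3.2.24
)
for img in "${images[@]}"; do
  docker pull mirrorgooglecontainers/${img}
  # drop the -amd64 suffix to match the names kubeadm expects
  docker tag mirrorgooglecontainers/${img} k8s.gcr.io/${img/-amd64/}
done
# coredns lives under its own namespace on Docker Hub
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6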
Re-run the initialization
[root@master yum.repos.d]# kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.2. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.100]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.503516 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xg8042.88iqbvhu6duxuo41
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.0.0.100:6443 --token xg8042.88iqbvhu6duxuo41 --discovery-token-ca-cert-hash sha256:2087793bdf6d9bd0972cc255dc0452fc1e517c907f96bcca67d82b25f5222992
Note: you can see that a whole pile of certificates was generated.
If you deploy as a regular user, kubectl needs its kubeconfig set up before it will work; these commands are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Since I'm deploying as root here, exporting the environment variable is enough:
[root@master yum.repos.d]# export KUBECONFIG=/etc/kubernetes/admin.conf
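The export only applies to the current shell; to keep it across logins, append it to root's profile (a sketch):

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile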
Check the cluster's health
[root@master yum.repos.d]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
[root@master yum.repos.d]# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady master 13m v1.13.3
Install the network plugin
Note: the master shows NotReady above because no pod network has been deployed yet; installing flannel fixes that.
[root@master yum.repos.d]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@master yum.repos.d]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@master yum.repos.d]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
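The flannel and coredns pods take a little while to come up; watch them, and the master should flip from NotReady to Ready once flannel is running:

kubectl get pods -n kube-system -o wide
kubectl get node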
Join the nodes to the cluster (node01 as the example)
[root@node01 ~]# yum repolist
[root@node01 ~]# yum install -y docker-ce kubeadm kubelet
[root@node01 ~]# systemctl start docker
[root@node01 ~]# systemctl enable docker
[root@master yum.repos.d]# scp /etc/docker/daemon.json node01:/etc/docker/
root@node01's password:
daemon.json
[root@node01 ~]# systemctl restart docker
[root@master yum.repos.d]# scp /etc/fstab node01:/etc/
root@node01's password:
fstab
[root@node01 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@node01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@master yum.repos.d]# scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
root@node01's password:
kubelet
[root@node01 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@node01 ~]# swapoff -a
[root@node01 ~]# free -m
total used free shared buff/cache available
Mem: 1984 120 1213 8 650 1676
Swap: 0 0 0
[root@node01 ~]# docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.13.3
[root@node01 ~]# docker pull mirrorgooglecontainers/pause:3.1
[root@node01 ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
[root@node01 ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@node01 ~]# docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
[root@node01 ~]# docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
[root@node01 ~]# kubeadm join 10.0.0.100:6443 --token xg8042.88iqbvhu6duxuo41 --discovery-token-ca-cert-hash sha256:2087793bdf6d9bd0972cc255dc0452fc1e517c907f96bcca67d82b25f5222992
[root@node01 ~]# echo $?
0
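An exit status of 0 means the join succeeded. Repeat the same steps on node02, then verify from the master that both nodes have registered and eventually go Ready:

kubectl get node

Note: the bootstrap token printed by kubeadm init expires after 24 hours by default; if you join a node later than that, generate a fresh join command on the master:

kubeadm token create --print-join-command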