Installing a Kubernetes Cluster with Kubeadm (Part 1)

Contents

I. Environment Preparation
II. Installing kubeadm and Related Tools
III. Downloading the Required Images
IV. Initializing Kubernetes
V. Installing the Network Plugin (Flannel)
VI. Troubleshooting


I. Environment Preparation

1. Operating system

CentOS 7.2

2. Disable swap

[root@localhost ~]# swapoff -a

## vim /etc/fstab and comment out the line below. Note: the device name may differ, e.g. rhel-swap instead of centos-swap
/dev/mapper/centos-swap swap                    swap    defaults        0 0
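
Alternatively, the swap entry can be commented out non-interactively; a minimal sketch (as noted, the exact device name may differ):

# Comment out every uncommented swap line in /etc/fstab, keeping a backup in /etc/fstab.bak
sed -i.bak '/^[^#].*\sswap\s/s/^/#/' /etc/fstab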

## Verify

[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           8454         888        5225          20        2339        7214
Swap:             0           0           0

3. Disable SELinux

[root@localhost ~]# setenforce 0

[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
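
To verify (an extra check, not in the original):

[root@localhost ~]# getenforce
Permissive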

4. Switch to the Aliyun Kubernetes yum mirror

[root@localhost ~]# vim /etc/yum.repos.d/kubernetes.repo
# Write the following content
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Rebuild the yum metadata cache:

yum makecache

5. Set the hostname

hostname k8s-master
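
Note that hostname only changes the name for the current session. On CentOS 7 a persistent change can be made with hostnamectl (an alternative, not in the original):

hostnamectl set-hostname k8s-master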

II. Installing kubeadm and Related Tools

# Install the latest versions
yum install -y docker kubelet kubeadm kubectl kubernetes-cni

# Or pin a specific version
yum install -y docker kubelet-1.18.6-0.x86_64 kubeadm-1.18.6-0.x86_64 kubectl-1.18.6-0.x86_64 kubernetes-cni
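
Before initializing the cluster, Docker and kubelet should be enabled and started (a typical follow-up step, assumed here rather than shown in the original):

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet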

Check the installed version:

[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The connection-refused message is expected at this stage: kubectl has no cluster to talk to until kubeadm init has been run.


III. Downloading the Required Images

1. List the required images

# List the images kubeadm will use (1.18.6 as an example)
[root@localhost ~]# kubeadm config images list
W0728 00:58:28.801372   17386 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

2. Pull the images

[root@localhost ~]# docker pull k8s.gcr.io/kube-apiserver:v1.18.6
...

3. Workaround for slow k8s.gcr.io pulls

If k8s.gcr.io is slow or unreachable, pull the same images from Docker Hub mirrors and retag them to the names kubeadm expects:

[root@localhost ~]# docker pull mirrorgcrio/kube-apiserver:v1.18.6
[root@localhost ~]# docker tag mirrorgcrio/kube-apiserver:v1.18.6 k8s.gcr.io/kube-apiserver:v1.18.6

[root@localhost ~]# docker pull mirrorgcrio/kube-controller-manager:v1.18.6
[root@localhost ~]# docker tag mirrorgcrio/kube-controller-manager:v1.18.6 k8s.gcr.io/kube-controller-manager:v1.18.6

[root@localhost ~]# docker pull mirrorgcrio/kube-scheduler:v1.18.6
[root@localhost ~]# docker tag mirrorgcrio/kube-scheduler:v1.18.6 k8s.gcr.io/kube-scheduler:v1.18.6

[root@localhost ~]# docker pull mirrorgcrio/kube-proxy:v1.18.6
[root@localhost ~]# docker tag mirrorgcrio/kube-proxy:v1.18.6 k8s.gcr.io/kube-proxy:v1.18.6


[root@localhost ~]# docker pull docker.io/codedingan/pause:3.2
[root@localhost ~]# docker tag docker.io/codedingan/pause:3.2 k8s.gcr.io/pause:3.2

[root@localhost ~]# docker pull docker.io/codedingan/etcd:3.4.3-0
[root@localhost ~]# docker tag docker.io/codedingan/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0

[root@localhost ~]# docker pull docker.io/codedingan/coredns:1.6.7
[root@localhost ~]# docker tag docker.io/codedingan/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
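
The pulls and retags above can also be scripted; a minimal loop equivalent to the commands shown (same mirror repositories as above):

for img in kube-apiserver:v1.18.6 kube-controller-manager:v1.18.6 kube-scheduler:v1.18.6 kube-proxy:v1.18.6; do
    docker pull mirrorgcrio/$img
    docker tag mirrorgcrio/$img k8s.gcr.io/$img
done

for img in pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    docker pull docker.io/codedingan/$img
    docker tag docker.io/codedingan/$img k8s.gcr.io/$img
done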

IV. Initializing Kubernetes

Run kubeadm init on the master node. --apiserver-advertise-address should be the master's own IP (here 172.19.12.169), and --pod-network-cidr=10.244.0.0/16 matches Flannel's default network.

kubeadm init  --apiserver-advertise-address=172.19.12.169  \
--kubernetes-version=v1.18.6  \
--service-cidr=10.1.0.0/16  \
--pod-network-cidr=10.244.0.0/16

Wait for the initialization to run:

[init] Using Kubernetes version: v1.18.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 172.19.12.169]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [172.19.12.169 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [172.19.12.169 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0728 02:16:33.160586   23379 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0728 02:16:33.161692   23379 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

After initialization completes, run the following commands so that kubectl can talk to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
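
kubectl should now be able to reach the API server; a quick check (not in the original):

kubectl cluster-info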

Pass bridged IPv4 traffic to the iptables chains. Without this setting, some bridged IPv4 traffic bypasses the iptables chains (a Linux kernel filter that every packet traverses before being matched and handed to the receiving process), and that traffic is lost. Configure this in a k8s.conf file (the file does not exist by default; create it yourself):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

V. Installing the Network Plugin (Flannel)

Running kubectl get node at this point shows the master node as NotReady, because no CNI network plugin is installed yet. For kube-flannel.yml, see: https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md

Download kube-flannel.yml:
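
For example (URL assumed from the flannel repository referenced above):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml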

[root@k8s-master kubernetes]# kubectl apply -f ./kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Verify:

[root@k8s-master kubernetes]#  kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   13m   v1.18.6
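
The flannel DaemonSet pods should also be running (an extra check, not in the original):

kubectl get pods -n kube-system -o wide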

VI. Troubleshooting

Problem 1:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1

Fix (echoing into /proc takes effect immediately but does not survive a reboot; the sysctl.d/k8s.conf configuration from Section IV makes it permanent):

[root@localhost ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
0
# Fix
[root@localhost ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
# Verify
[root@localhost ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1


Problem 2:

[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Fix:

[root@localhost ~]# swapoff -a
[root@localhost ~]# 

## vim /etc/fstab and comment out the line below. Note: the device name may differ, e.g. rhel-swap instead of centos-swap
/dev/mapper/centos-swap swap                    swap    defaults        0 0

Problem 3:

[root@localhost kubernetes]# kubectl apply -f /usr/local/kubernetes/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
unable to recognize "/usr/local/kubernetes/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "/usr/local/kubernetes/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "/usr/local/kubernetes/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "/usr/local/kubernetes/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "/usr/local/kubernetes/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

[root@localhost kubernetes]# kubectl apply -f /usr/local/kubernetes/kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
error: error validating "/usr/local/kubernetes/kube-flannel.yml": error validating data: ValidationError(DaemonSet.spec): missing required field "selector" in io.k8s.api.apps.v1.DaemonSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

Fix:

In kube-flannel.yml, replace extensions/v1beta1 with apps/v1: from Kubernetes 1.16 on, DaemonSet is no longer served from extensions/v1beta1. The second error appears because the apps/v1 API additionally requires spec.selector, as shown in the sketch below.
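
A minimal sketch of the edit, assuming the standard labels used in the flannel manifest (app: flannel, tier: node):

# Update the apiVersion of the DaemonSets in place
sed -i 's#extensions/v1beta1#apps/v1#g' kube-flannel.yml

# Then add a selector to each DaemonSet spec, matching the pod template labels:
#   spec:
#     selector:
#       matchLabels:
#         app: flannel
#         tier: node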

Problem 4:

# systemctl status kubelet -a
8月 17 18:02:56 k8s-master kubelet[11979]: E0817 18:02:56.177932   11979 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"

Fix:

# Method 1:
# vim /var/lib/kubelet/kubeadm-flags.env
# Default:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"

# Change to:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"


Restart kubelet:

systemctl restart kubelet

Problem 5:

Changing the NodePort range:

# vim /etc/kubernetes/manifests/kube-apiserver.yaml
# Add the line: - --service-node-port-range=30000-40000
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-40000
    - --advertise-address=172.19.12.169
    ...

Restart kubelet; kube-apiserver runs as a static Pod, so kubelet will recreate it with the new flag:

systemctl restart kubelet
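
To confirm the flag took effect (the apiserver pod name is derived from the hostname, here assumed to be k8s-master):

kubectl -n kube-system get pod kube-apiserver-k8s-master -o yaml | grep service-node-port-range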
