Deploying a Kubernetes Cluster on CentOS 7.6 (Detailed Guide)

Node configuration:

Node name    IP
master       192.168.50.10
node01       192.168.50.20
node02       192.168.50.30

1 Environment

Operating system: CentOS 7.6 (CentOS-7-x86_64-DVD-1810)

Docker: 19.03.15

Kubernetes: 1.20.0

2 Preparation

1. Disable SELinux

[root]# setenforce 0

[root]# vi /etc/selinux/config

SELINUX=disabled

2. Disable firewalld

[root]# systemctl stop firewalld

[root]# systemctl disable firewalld

3. Disable swap

[root]# swapoff -a

[root]# vi /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
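The same change can be made non-interactively; this is a sketch that comments out any swap entry in /etc/fstab so swap stays off after a reboot:

```shell
# Comment out every swap mount line in /etc/fstab
sed -ri '/\sswap\s/s/^/#/' /etc/fstab
```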

4. Modify GRUB

[root]# vi /etc/default/grub

Add the following to the GRUB_CMDLINE_LINUX line:

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
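Editing /etc/default/grub alone does not change the running kernel command line; on a BIOS-booted CentOS 7 host the change takes effect after regenerating the GRUB config and rebooting (UEFI systems write grub.cfg to a different path):

```shell
# Rebuild grub.cfg from /etc/default/grub, then reboot to apply
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
```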

5. Configure the hosts file

[root]# vi /etc/hosts

192.168.50.10 master

192.168.50.20 node01

192.168.50.30 node02

Also set each node's hostname to match, e.g. on the master:

[root]# hostnamectl set-hostname master
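One step the list above omits, but which kubeadm commonly warns about on CentOS 7, is letting iptables see bridged traffic. A sketch of the usual settings, run on all three nodes:

```shell
# Load the bridge netfilter module now and at boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# Required by kube-proxy and most CNI plugins
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```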

3 Installing Kubernetes

3.1 Install Docker

Install Docker on all three nodes.

[root]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

[root]# yum install docker-ce-19.03.15 -y

[root]# cat > /etc/docker/daemon.json << EOF

{

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF

[root]# systemctl enable docker

[root]# systemctl start docker
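Since daemon.json sets the systemd cgroup driver, it is worth verifying that Docker picked it up before running kubeadm:

```shell
# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'
```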

3.2 Install kubelet, kubeadm, and kubectl

Install on all three nodes.

cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

[root]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

[root]# systemctl enable kubelet
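A quick sanity check that all three components are installed at the pinned 1.20.0 version:

```shell
kubeadm version -o short           # expect v1.20.0
kubectl version --client --short
rpm -q kubelet kubeadm kubectl
```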

3.3 Initialize the master

Initialize the cluster on the master node. (Note: the sample output below was captured on a lab host with IP 192.168.1.73 and node names node1/node2; with the topology above, the address is 192.168.50.10.)

[root]# kubeadm init --apiserver-advertise-address=192.168.50.10 --pod-network-cidr=192.168.0.0/16 --image-repository registry.aliyuncs.com/google_containers

I0517 10:42:25.377859   10391 version.go:251] remote version is much newer: v1.24.0; falling back to: stable-1.20

[init] Using Kubernetes version: v1.20.15

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.1.73]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.1.73 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.1.73 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

[control-plane] Creating static Pod manifest for "kube-scheduler"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 14.016126 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"

[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: pkhisz.whbtwtmxw05led93

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.73:6443 --token pkhisz.whbtwtmxw05led93 \

    --discovery-token-ca-cert-hash sha256:e6e087b6ea5fe65c70b99f5ad0050642b3c7072224081fc7978ce026520907da

Note:

  The output above shows how to configure the kubeconfig environment variables, how to deploy a pod network, and how to join worker nodes to the cluster.

Warning 1 (can be ignored)

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Fix for warning 1:

[root]# cat > /etc/docker/daemon.json << EOF

{

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF

[root]# systemctl restart docker

Reset kubeadm and re-initialize:

[root]# kubeadm reset

[root]# kubeadm init --apiserver-advertise-address=192.168.1.73 --pod-network-cidr=192.168.0.0/16 --image-repository registry.aliyuncs.com/google_containers

 

Warning 2 (can be ignored)

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Fix for warning 2:

[root]# systemctl enable kubelet

Reset kubeadm and re-initialize:

[root]# kubeadm reset

[root]# kubeadm init --apiserver-advertise-address=192.168.1.73 --pod-network-cidr=192.168.0.0/16 --image-repository registry.aliyuncs.com/google_containers

3.4 Configure environment variables on the master

For the root user:

[root]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /root/.bashrc

[root]# source /root/.bashrc

 

For a non-root user:

[k8s]# mkdir -p $HOME/.kube

[k8s]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[k8s]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

If the environment variables are not configured, the network configuration step below fails with:

[root]# kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

The connection to the server localhost:8080 was refused - did you specify the right host or port?

3.5 Configure the pod network on the master

[root@master ~]# kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

configmap/calico-config created

Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created

customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created

clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created

clusterrole.rbac.authorization.k8s.io/calico-node created

clusterrolebinding.rbac.authorization.k8s.io/calico-node created

daemonset.apps/calico-node created

serviceaccount/calico-node created

deployment.apps/calico-kube-controllers created

serviceaccount/calico-kube-controllers created

3.6 Configure kubectl auto-completion on the master

Check the completion help and follow it to enable kubectl auto-completion:

[root]# kubectl completion -h

Output shell completion code for the specified shell (bash or zsh). The shell code must be evaluated to provide

interactive completion of kubectl commands.  This can be done by sourcing it from the .bash_profile.

 Detailed instructions on how to do this are available here:

https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion

 Note for zsh users: [1] zsh completions are only supported in versions of zsh >= 5.2

Examples:

  # Installing bash completion on macOS using homebrew

  ## If running Bash 3.2 included with macOS

  brew install bash-completion

  ## or, if running Bash 4.1+

  brew install bash-completion@2

  ## If kubectl is installed via homebrew, this should start working immediately.

  ## If you've installed via other means, you may need add the completion to your completion directory

  kubectl completion bash > $(brew --prefix)/etc/bash_completion.d/kubectl

  

  

  # Installing bash completion on Linux

  ## If bash-completion is not installed on Linux, please install the 'bash-completion' package

  ## via your distribution's package manager.

  ## Load the kubectl completion code for bash into the current shell

  source <(kubectl completion bash)

  ## Write bash completion code to a file and source if from .bash_profile

  kubectl completion bash > ~/.kube/completion.bash.inc

  printf "

  # Kubectl shell completion

  source '$HOME/.kube/completion.bash.inc'

  " >> $HOME/.bash_profile

  source $HOME/.bash_profile

  

  # Load the kubectl completion code for zsh[1] into the current shell

  source <(kubectl completion zsh)

  # Set the kubectl completion code for zsh[1] to autoload on startup

  kubectl completion zsh > "${fpath[1]}/_kubectl"

Usage:

  kubectl completion SHELL [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

 

Enable kubectl auto-completion:

[root]# echo "source <(kubectl completion bash)" >> /root/.bashrc

[root]# source /root/.bashrc

3.7 Check cluster status on the master

Check the status of the cluster components:

[root]# kubectl get componentstatuses

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS      MESSAGE                                                                                       ERROR

controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   

scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   

etcd-0               Healthy     {"health":"true"}  

Note:

controller-manager status is Unhealthy;

scheduler status is Unhealthy.

Fix: comment out or delete the "- --port=0" line in kube-scheduler.yaml and kube-controller-manager.yaml:

[root]# vi /etc/kubernetes/manifests/kube-scheduler.yaml

[root]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
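The same edit can be scripted; this sketch comments the flag out in both static pod manifests (the kubelet re-creates the pods automatically once the files change):

```shell
# Comment out "- --port=0" in both static pod manifests
sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' \
    /etc/kubernetes/manifests/kube-scheduler.yaml \
    /etc/kubernetes/manifests/kube-controller-manager.yaml
```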

Check the component status again:

[root]# kubectl get componentstatuses

Warning: v1 ComponentStatus is deprecated in v1.19+

NAME                 STATUS    MESSAGE             ERROR

scheduler            Healthy   ok                  

controller-manager   Healthy   ok                  

etcd-0               Healthy   {"health":"true"}  

View cluster information:

[root]# kubectl cluster-info

Kubernetes control plane is running at https://192.168.1.73:6443

KubeDNS is running at https://192.168.1.73:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Check the core component pods:

[root]# kubectl -n kube-system get pod

NAME                                      READY   STATUS    RESTARTS   AGE

calico-kube-controllers-bcc6f659f-rwbnl   1/1     Running   0          174m

calico-node-9cqwz                         1/1     Running   0          174m

coredns-7f89b7bc75-5948b                  1/1     Running   0          3h27m

coredns-7f89b7bc75-ss8r7                  1/1     Running   0          3h27m

etcd-master                               1/1     Running   0          3h27m

kube-apiserver-master                     1/1     Running   0          3h27m

kube-controller-manager-master            1/1     Running   0          2m47s

kube-proxy-ccmz5                          1/1     Running   0          3h27m

kube-scheduler-master                     1/1     Running   0          7m4s

3.8 Join the worker nodes to the cluster

On the master, create a permanent token (by default a token is valid for 24h):

[root]# kubeadm token create --ttl 0

6m4lq3.wspl9bkqpijog3ne

[root]# kubeadm token list

TOKEN                     TTL         EXPIRES   USAGES                   DESCRIPTION                                                EXTRA GROUPS

6m4lq3.wspl9bkqpijog3ne   <forever>   <never>   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

 

Get the discovery-token-ca-cert-hash value from the master:

[root]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

df98f4545e43a430a1bd2903d770172880f7f0d5090625654ef0653a7e3025bd
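Alternatively, kubeadm can emit the complete join command in one step, avoiding manual assembly of the token and CA hash:

```shell
# Prints a ready-to-run "kubeadm join ..." line for the workers
kubeadm token create --ttl 0 --print-join-command
```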

 

Run the join command on node1 and node2 (mind the line formatting):

[root]# kubeadm join 192.168.1.73:6443 --token 6m4lq3.wspl9bkqpijog3ne --discovery-token-ca-cert-hash sha256:df98f4545e43a430a1bd2903d770172880f7f0d5090625654ef0653a7e3025bd

[preflight] Running pre-flight checks

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

 

View the cluster nodes:

[root]# kubectl get nodes -o wide

NAME     STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME

master   Ready    control-plane,master   3h56m   v1.20.0   192.168.1.73   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.15

node1    Ready    <none>                 88s     v1.20.0   192.168.1.74   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.15

node2    Ready    <none>                 8m48s   v1.20.0   192.168.1.75   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://19.3.15

At this point, the Kubernetes cluster deployment is complete.
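As an optional final check (not part of the original steps), a throwaway nginx deployment exercises scheduling, the Calico network, and kube-proxy end to end:

```shell
# Create a test deployment and expose it on a NodePort
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

# Watch the pod come up, then note the assigned NodePort
kubectl get pods -l app=nginx -o wide
kubectl get svc nginx    # curl http://<node-ip>:<nodeport> from any node

# Clean up
kubectl delete svc,deployment nginx
```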
