k8s Installation: Quickly Install a k8s Cluster with kubeadm (v1.17.3)


Preparation

  1. Two servers (Ubuntu 16.04):
  • Set the hostname with hostnamectl:
sudo hostnamectl set-hostname node-1
  • Configure DNS:
cat > /etc/resolv.conf <<EOF
nameserver 8.8.8.8
nameserver 114.114.114.114
EOF
  • Disable swap:
sudo swapoff -a
To disable swap permanently, comment out the swap line in /etc/fstab (see the sketch after this list).
  • Network layout:
node-1: 192.168.5.6 (master)
node-2: 192.168.5.7 (node)
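
The nodes later need to resolve each other's hostnames (kubeadm init warns "hostname node-1 could not be reached" otherwise), and swap must stay off across reboots. A minimal sketch covering both, assuming the IP layout above; run it on both machines:

# Let node-1 and node-2 resolve each other by name
cat >> /etc/hosts <<EOF
192.168.5.6 node-1
192.168.5.7 node-2
EOF

# Comment out any active swap line in /etc/fstab so swap stays off after reboot
sudo sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab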

2. Install Docker (version 19.03.4):

# Install Docker CE
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install -y \
  apt-transport-https ca-certificates curl software-properties-common gnupg2

### Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

### Add the Docker apt repository
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

## Install Docker CE
apt-get update && apt-get install -y \
  containerd.io \
  docker-ce=5:19.03.4~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:19.03.4~3-0~ubuntu-$(lsb_release -cs)

# Set up the daemon (kubeadm recommends the systemd cgroup driver)
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
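
After the restart, it is worth confirming that Docker actually picked up the systemd cgroup driver configured above, since kubeadm's preflight checks warn when they detect cgroupfs. A quick check, assuming a standard Docker CE install:

# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i "cgroup driver"
# Confirm the pinned Docker version (expected 19.03.4)
docker version --format '{{.Server.Version}}'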

3. Install kubeadm

  • Update the apt sources and switch to a domestic (Aliyun) mirror
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF  
apt-get update
  • Install kubelet, kubeadm, and kubectl

1) Install the latest version

apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

2) Install a specific version

## Find the available versions
apt-cache madison kubeadm

## Pin the version
K_VER="1.17.3-00"
apt-get install -y kubelet=${K_VER}
apt-get install -y kubectl=${K_VER}
apt-get install -y kubeadm=${K_VER}
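
As with the latest-version path above, it is probably worth holding the pinned packages so a later apt-get upgrade cannot move them, and then verifying what was installed; a short sketch:

# Prevent apt from upgrading the pinned packages
apt-mark hold kubelet kubeadm kubectl

# Verify the installed versions (all should report v1.17.3)
kubeadm version -o short
kubectl version --client --short
kubelet --version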

Installing the Master Node

1. Download the images

Google's registry (k8s.gcr.io) is not reachable from mainland China, so pull the images from a mirror first and then re-tag them. Create a script: vim kubeimages.sh

#!/bin/bash

images=(k8s.gcr.io/kube-apiserver:v1.17.3
        k8s.gcr.io/kube-controller-manager:v1.17.3
        k8s.gcr.io/kube-scheduler:v1.17.3
        k8s.gcr.io/kube-proxy:v1.17.3
        k8s.gcr.io/pause:3.1
        k8s.gcr.io/etcd:3.4.3-0
        k8s.gcr.io/coredns:1.6.5)

for var in "${images[@]}";do
        # pull from the azk8s mirror, then re-tag back to the k8s.gcr.io name
        image=gcr.azk8s.cn/google-containers/${var#k8s.gcr.io/}
        docker pull ${image}
        docker tag ${image} ${var}
done

docker pull coredns/coredns:1.6.5
docker tag coredns/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5
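
A short usage note, assuming the script was saved as kubeimages.sh in the current directory:

chmod +x kubeimages.sh
./kubeimages.sh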

2. Check the images

root@bogon:/opt/k8s# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.azk8s.cn/google-containers/kube-proxy                v1.17.3             ae853e93800d        4 weeks ago         116MB
k8s.gcr.io/kube-proxy                                    v1.17.3             ae853e93800d        4 weeks ago         116MB
gcr.azk8s.cn/google-containers/kube-apiserver            v1.17.3             90d27391b780        4 weeks ago         171MB
k8s.gcr.io/kube-apiserver                                v1.17.3             90d27391b780        4 weeks ago         171MB
k8s.gcr.io/kube-controller-manager                       v1.17.3             b0f1517c1f4b        4 weeks ago         161MB
gcr.azk8s.cn/google-containers/kube-controller-manager   v1.17.3             b0f1517c1f4b        4 weeks ago         161MB
gcr.azk8s.cn/google-containers/kube-scheduler            v1.17.3             d109c0821a2b        4 weeks ago         94.4MB
k8s.gcr.io/kube-scheduler                                v1.17.3             d109c0821a2b        4 weeks ago         94.4MB
coredns/coredns                                          1.6.5               70f311871ae1        4 months ago        41.6MB
gcr.azk8s.cn/google-containers/coredns                   1.6.5               70f311871ae1        4 months ago        41.6MB
k8s.gcr.io/coredns                                       1.6.5               70f311871ae1        4 months ago        41.6MB
gcr.azk8s.cn/google-containers/etcd                      3.4.3-0             303ce5db0e90        4 months ago        288MB
k8s.gcr.io/etcd                                          3.4.3-0             303ce5db0e90        4 months ago        288MB
gcr.azk8s.cn/google-containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
k8s.gcr.io/pause                                         3.1                 da86e6ba6ca1        2 years ago         742kB

3. Initialize the master node (kubeadm init)

  • --kubernetes-version specifies the Kubernetes version
  • --apiserver-advertise-address specifies the address the API server advertises on
  • --pod-network-cidr 10.244.0.0/16 specifies the Pod network CIDR (the range flannel expects)
  • --apiserver-bind-port specifies the port the API server binds to (6443)
  • --ignore-preflight-errors all skips checks for parts already installed (if the run fails, add it after fixing the problem and re-run)
root@bogon:/opt/k8s# kubeadm init --apiserver-advertise-address 192.168.5.6 --apiserver-bind-port 6443 --kubernetes-version v1.17.3 --pod-network-cidr 10.244.0.0/16
W0313 00:01:24.499860    4437 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0313 00:01:24.499977    4437 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "node-1" could not be reached
        [WARNING Hostname]: hostname "node-1": lookup node-1 on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.6]     
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [192.168.5.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [192.168.5.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0313 00:01:29.074379    4437 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0313 00:01:29.075733    4437 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.507770 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xt6xye.9bktuznpzf5drw9f
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.5.6:6443 --token xt6xye.9bktuznpzf5drw9f \
    --discovery-token-ca-cert-hash sha256:c439a0f1aba275ad1c350b3e52a3de0578aa4b0863a3c05c9b6404f04e4e4309

When the command finishes with output like the above, the master node has been initialized successfully. Follow the prompt to configure kubectl as a regular user; here I switched to the ubuntu user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
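If you stay as root instead, pointing kubectl at the admin kubeconfig also works; a one-line alternative:

export KUBECONFIG=/etc/kubernetes/admin.conf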
  • Check the cluster status and confirm all components are Healthy
ubuntu@node-1:~$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
  • Check the k8s status

The node is NotReady and the two CoreDNS pods are Pending because the network plugin (flannel) has not been installed yet.

ubuntu@node-1:~$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
node-1   NotReady   master   20m   v1.17.3

ubuntu@node-1:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-dm4vr         0/1     Pending   0          18m   <none>        <none>   <none>           <none>
kube-system   coredns-6955765f44-vzct2         0/1     Pending   0          18m   <none>        <none>   <none>           <none>
kube-system   etcd-node-1                      1/1     Running   0          18m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-apiserver-node-1            1/1     Running   0          18m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-controller-manager-node-1   1/1     Running   0          18m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-proxy-vkk8z                 1/1     Running   0          18m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-scheduler-node-1            1/1     Running   0          18m   192.168.5.6   node-1   <none>           <none>
  • Install flannel
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml 

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f kube-flannel-rbac.yml
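
To follow the rollout, you can watch the kube-system pods until the flannel DaemonSet and CoreDNS come up; for example:

# Watch until the kube-flannel-ds-* and coredns-* pods show Running
kubectl get pods -n kube-system -w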
  • Check the k8s status again

The master does not become Ready immediately; after a few minutes, everything shows as normal.

ubuntu@node-1:/opt/k8s$ kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   43m   v1.17.3

ubuntu@node-1:/opt/k8s$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-dm4vr         1/1     Running   0          41m     10.244.0.2    node-1   <none>           <none>
kube-system   coredns-6955765f44-vzct2         1/1     Running   0          41m     10.244.0.3    node-1   <none>           <none>
kube-system   etcd-node-1                      1/1     Running   0          41m     192.168.5.6   node-1   <none>           <none>
kube-system   kube-apiserver-node-1            1/1     Running   0          41m     192.168.5.6   node-1   <none>           <none>
kube-system   kube-controller-manager-node-1   1/1     Running   0          41m     192.168.5.6   node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-rbt75      1/1     Running   0          2m45s   192.168.5.6   node-1   <none>           <none>
kube-system   kube-proxy-vkk8z                 1/1     Running   0          41m     192.168.5.6   node-1   <none>           <none>
kube-system   kube-scheduler-node-1            1/1     Running   0          41m     192.168.5.6   node-1   <none>           <none>

The k8s master is now initialized; the next step is to join the worker node.

Joining the Worker Node

  1. Download the base images

On the worker node, install kubectl, kubeadm, and kubelet following the same preparation steps as on the master, then download the images (a worker node only needs k8s.gcr.io/pause:3.1 and k8s.gcr.io/kube-proxy:v1.17.3).

#!/bin/bash

images=(k8s.gcr.io/kube-proxy:v1.17.3
        k8s.gcr.io/pause:3.1)

for var in "${images[@]}";do
        # pull from the azk8s mirror, then re-tag back to the k8s.gcr.io name
        image=gcr.azk8s.cn/google-containers/${var#k8s.gcr.io/}
        docker pull ${image}
        docker tag ${image} ${var}
done
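
The image listing below also shows quay.io/coreos/flannel on the node; if pulls from quay.io are slow, pre-pulling it can save time when the flannel DaemonSet is scheduled here. The tag matches the one in the listing:

docker pull quay.io/coreos/flannel:v0.12.0-amd64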
  • Check the images
root@node-2:~# docker images
REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                      v0.12.0-amd64       4e9f801d2217        10 hours ago        52.8MB
gcr.azk8s.cn/google-containers/kube-proxy   v1.17.3             ae853e93800d        4 weeks ago         116MB
k8s.gcr.io/kube-proxy                       v1.17.3             ae853e93800d        4 weeks ago         116MB
gcr.azk8s.cn/google-containers/pause        3.1                 da86e6ba6ca1        2 years ago         742kB
k8s.gcr.io/pause                            3.1                 da86e6ba6ca1        2 years ago         742kB

2. Join the cluster (using the kubeadm join command printed at the end of the master install)

root@node-2:~# kubeadm join 192.168.5.6:6443 --token xt6xye.9bktuznpzf5drw9f     --discovery-token-ca-cert-hash sha256:c439a0f1aba275ad1c350b3e52a3de0578aa4b0863a3c05c9b6404f04e4e4309
W0313 01:08:58.247189    7378 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "node-2" could not be reached
        [WARNING Hostname]: hostname "node-2": lookup node-2 on 8.8.8.8:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
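
If the bootstrap token has expired by the time a node joins (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

# Run on the master: prints a ready-to-use kubeadm join command
kubeadm token create --print-join-command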
  • Check the cluster status; node-2 has joined the cluster
root@node-1:/opt/k8s# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   84m   v1.17.3
node-2   Ready    <none>   16m   v1.17.3
  • Verify the cluster

Run the verification on the master node; flannel and kube-proxy are now running on node-2 as well.

root@node-1:/opt/k8s# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   Ready    master   156m   v1.17.3
node-2   Ready    <none>   89m    v1.17.3

root@node-1:/opt/k8s# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-dm4vr         1/1     Running   0          156m   10.244.0.2    node-1   <none>           <none>
kube-system   coredns-6955765f44-vzct2         1/1     Running   0          156m   10.244.0.3    node-1   <none>           <none>
kube-system   etcd-node-1                      1/1     Running   0          156m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-apiserver-node-1            1/1     Running   0          156m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-controller-manager-node-1   1/1     Running   0          156m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-6phdv      1/1     Running   0          89m    192.168.5.7   node-2   <none>           <none>
kube-system   kube-flannel-ds-amd64-rbt75      1/1     Running   0          117m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-proxy-vkk8z                 1/1     Running   0          156m   192.168.5.6   node-1   <none>           <none>
kube-system   kube-proxy-zqfkl                 1/1     Running   0          89m    192.168.5.7   node-2   <none>           <none>
kube-system   kube-scheduler-node-1            1/1     Running   0          156m   192.168.5.6   node-1   <none>           <none>

At this point, the cluster installation is complete.
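
As a final sanity check, a quick smoke test can confirm that pods schedule onto the worker; a minimal sketch (the deployment name and image are arbitrary):

# Create a test deployment and see which node the pod lands on
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx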

  • Removing a node
# Reset the node (run on the node being removed)
kubeadm reset
# Delete the node; its data is then cleared from etcd (run on any node where kubectl works)
kubectl delete node node-1
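
Before deleting a node, it is usually safer to drain it first so its workloads are evicted cleanly; a sketch using node-2 as the example:

# Run where kubectl works: evict pods, ignoring DaemonSet-managed ones
kubectl drain node-2 --ignore-daemonsets --delete-local-data
# After kubeadm reset on the node, remove it from the cluster
kubectl delete node node-2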