【四二学堂】Installing k8s/Kubernetes v1.20.1 on Ubuntu 20


Docker installation package download link (it costs a few download credits, but it's worth it, haha):

https://download.csdn.net/download/qq_38187437/13755761

Chinese documentation:

http://docs.kubernetes.org.cn/227.html#Kubernetes

Three servers are used.

1. Server OS version

Ubuntu 20.04 64-bit
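
You can confirm the release on each machine first with the standard commands:

# print the Ubuntu release information
lsb_release -a
cat /etc/os-release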

2. Set the master node hostname

vi /etc/hostname
master001

3. Update the master node hosts file

vi /etc/hosts
172.17.93.204   master001       master001

4. Set worker node 1 hostname

vi /etc/hostname
slave001

5. Update worker node 1 hosts file

vi /etc/hosts
172.17.93.205   slave001        slave001

6. Set worker node 2 hostname

vi /etc/hostname
slave002

7. Update worker node 2 hosts file

vi /etc/hosts
172.17.93.195   slave002        slave002
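
For name resolution between the nodes it is common to put all three entries into /etc/hosts on every node; a minimal sketch using the IPs listed above:

172.17.93.204   master001
172.17.93.205   slave001
172.17.93.195   slave002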

8. The kernel major version must be the same on all nodes

uname -r
5.4.0-54-generic
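
To confirm the kernel version matches everywhere without logging into each machine, a loop like the following works, assuming root SSH access and the worker IPs from the hosts entries above:

# compare kernel versions across the worker nodes
for host in 172.17.93.205 172.17.93.195; do
    echo -n "$host: "; ssh root@$host uname -r
done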

9. Install Docker

Upload the Docker installation package and install it from the tarball.

Copy the package to the other nodes as well:

scp docker-20.10.1.tgz 172.17.93.207:/root
tar -zxvf docker-20.10.1.tgz
cp /root/docker/* /usr/local/bin

// Edit the docker.service unit file

vi /lib/systemd/system/docker.service

// Unit file contents

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=docker.socket
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/local/bin/dockerd --storage-driver=overlay -H fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

// Edit the docker.socket unit file

vi /lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=root

[Install]
WantedBy=sockets.target

// Docker daemon default options

vi /etc/default/docker
DOCKER_OPTS="--selinux-enabled --insecure-registry local-registry.com"
systemctl enable docker

systemctl start docker

10. Configure the Docker registry mirror

vi /etc/docker/daemon.json
{
"registry-mirrors":["https://ozcouv1b.mirror.aliyuncs.com"]
}

Restart the Docker service:

# Reload all modified unit files
sudo systemctl daemon-reload
# Restart the Docker service
sudo systemctl restart docker

# Test
docker ps -a
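
To confirm that the mirror from daemon.json was actually picked up, check the daemon info:

# the configured mirror should appear under "Registry Mirrors"
docker info | grep -A 1 "Registry Mirrors"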

11. Configure a Kubernetes apt source (China mirror)

1. Create the source list file

sudo touch /etc/apt/sources.list.d/kubernetes.list

2. Add write permission

sudo chmod 666 /etc/apt/sources.list.d/kubernetes.list

3. Edit the file

vi /etc/apt/sources.list.d/kubernetes.list

and add the following line:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

4. Run sudo apt update to refresh the package sources. At first you may see an error like this:

sudo apt update
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Err:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
Hit:2 http://mirrors.aliyun.com/ubuntu cosmic InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu cosmic-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu cosmic-backports InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu cosmic-security InRelease
Err:6 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu cosmic InRelease
  Could not wait for server fd - select (11: Resource temporarily unavailable) [IP: 202.141.176.110 443]
Reading package lists... Done
W: GPG error: http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB
E: The repository 'http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
The key part is:

The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6A030B21BA07F4FB

Signature verification failed, so the key must be imported. Note the key shown after NO_PUBKEY: 6A030B21BA07F4FB.


Add the signing key.
Run the following command, using the key from the error message (the last 8 characters of the key shown after NO_PUBKEY):

gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB

Then run the following command; if it prints OK, the key was added successfully and you can proceed with the installation:

gpg --export --armor BA07F4FB | sudo apt-key add -

sudo apt update
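
Once the update runs cleanly, you can list the versions the mirror provides before pinning 1.20.1-00, for example:

# show the most recent kubeadm versions available from the repository
apt-cache madison kubeadm | head -n 5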

12. Install the Kubernetes packages

apt update && apt-get install -y kubelet=1.20.1-00 kubernetes-cni=0.8.7-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
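
A quick version check confirms the pinned packages were installed; optionally hold them so a later apt upgrade does not move the cluster off 1.20.1:

kubelet --version
kubeadm version -o short
kubectl version --client --short
# optional: prevent accidental upgrades
apt-mark hold kubelet kubeadm kubectl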

13. Initialize and start

Turn off swap:

# turn swap off immediately
$ sudo swapoff -a
# permanently disable the swap partition
$ sudo sed -i 's/.*swap.*/#&/' /etc/fstab
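
You can verify that swap is really off before running kubeadm; the kubelet refuses to start while swap is enabled:

# prints nothing when no swap device is active
swapon --show
# the Swap line should show 0B
free -h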

Install the Kubernetes base environment on the master node.
The version installed in this guide is v1.20.1.

mkdir -p /home/glory/working
cd /home/glory/working/
apt-get install kubectl kubelet kubeadm && systemctl enable kubelet && systemctl start kubelet

14. Everything above must be done on all three nodes

15. Initialize Kubernetes on the master node

Initialize the cluster with kubeadm:

kubeadm init --kubernetes-version=1.20.0  \
--apiserver-advertise-address=172.17.93.204   \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

16. More kubeadm configuration parameters can be viewed with:

kubeadm config print init-defaults

17. A successful initialization prints a lot of output; take note of the commands at the end:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.93.204:6443 --token u87mbu.jg2kvejo5r8cjwsm \
    --discovery-token-ca-cert-hash sha256:9bb29a3b13f12b6dc58730cc45fbb13ae67500267e5c4e89a86f960d7e1c3481 


18. Following the official prompt, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
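
At this point kubectl should be able to reach the API server; a quick sanity check:

# confirm kubectl can talk to the control plane
kubectl cluster-info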

19. Enable the kubelet service and start it

# Enable kubelet to start on boot
$ sudo systemctl enable kubelet
# Start the kubelet service
$ sudo systemctl start kubelet

20. Verify: the master node shows a status of NotReady, which confirms the initialization succeeded

kubectl get nodes

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   12m   v1.13.1

21. Check the current cluster component status

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

At this point there is only one master and no worker nodes, and the master is still NotReady, so the worker nodes need to be joined to the cluster managed by the master. Before joining, we first set up the cluster's internal pod network; Calico is used here.

22. Join the other nodes to the cluster

Run the join command generated on the master on each of the other nodes:

kubeadm join 172.17.93.204:6443 --token u87mbu.jg2kvejo5r8cjwsm \
    --discovery-token-ca-cert-hash sha256:9bb29a3b13f12b6dc58730cc45fbb13ae67500267e5c4e89a86f960d7e1c3481
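
If the token printed by kubeadm init has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

# regenerate a join command with a new token
kubeadm token create --print-join-command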

23. Deploy Calico

This only needs to be run on the master node:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

24. Wait for the installation to complete

root@iZ2ze3rugpmg6ym2u7ntpoZ:/home/glory/working# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                              READY   STATUS     RESTARTS   AGE     IP              NODE                      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-744cfdf676-8m26q          0/1     Pending    0          21s     <none>          <none>                    <none>           <none>
kube-system   calico-node-4k5w2                                 0/1     Init:2/3   0          21s     172.17.93.208   iz2zej54990oq4ayss6nrkz   <none>           <none>
kube-system   calico-node-6d2bx                                 0/1     Init:2/3   0          22s     172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   calico-node-7sctg                                 0/1     Init:2/3   0          21s     172.17.93.207   iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   coredns-7f89b7bc75-4khb8                          0/1     Pending    0          3m36s   <none>          <none>                    <none>           <none>
kube-system   coredns-7f89b7bc75-r6rf2                          0/1     Pending    0          3m36s   <none>          <none>                    <none>           <none>
kube-system   etcd-iz2ze3rugpmg6ym2u7ntpoz                      1/1     Running    0          3m44s   172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-apiserver-iz2ze3rugpmg6ym2u7ntpoz            1/1     Running    0          3m44s   172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-controller-manager-iz2ze3rugpmg6ym2u7ntpoz   1/1     Running    0          3m44s   172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-proxy-bt5lf                                  1/1     Running    0          60s     172.17.93.207   iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   kube-proxy-c55bx                                  1/1     Running    0          47s     172.17.93.208   iz2zej54990oq4ayss6nrkz   <none>           <none>
kube-system   kube-proxy-hf7jb                                  1/1     Running    0          3m36s   172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-scheduler-iz2ze3rugpmg6ym2u7ntpoz            1/1     Running    0          3m44s   172.17.93.204   iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>

Wait until all pods become Running and ready:

root@iZ2ze3rugpmg6ym2u7ntpoZ:/home/glory/working# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE     IP               NODE                      NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-744cfdf676-8m26q          1/1     Running   0          87s     10.122.56.65     iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   calico-node-4k5w2                                 1/1     Running   0          87s     172.17.93.208    iz2zej54990oq4ayss6nrkz   <none>           <none>
kube-system   calico-node-6d2bx                                 1/1     Running   0          88s     172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   calico-node-7sctg                                 1/1     Running   0          87s     172.17.93.207    iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   coredns-7f89b7bc75-4khb8                          1/1     Running   0          4m42s   10.122.56.66     iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   coredns-7f89b7bc75-r6rf2                          1/1     Running   0          4m42s   10.122.135.193   iz2zej54990oq4ayss6nrkz   <none>           <none>
kube-system   etcd-iz2ze3rugpmg6ym2u7ntpoz                      1/1     Running   0          4m50s   172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-apiserver-iz2ze3rugpmg6ym2u7ntpoz            1/1     Running   0          4m50s   172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-controller-manager-iz2ze3rugpmg6ym2u7ntpoz   1/1     Running   0          4m50s   172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-proxy-bt5lf                                  1/1     Running   0          2m6s    172.17.93.207    iz2zej54990oq4ayss6nrjz   <none>           <none>
kube-system   kube-proxy-c55bx                                  1/1     Running   0          113s    172.17.93.208    iz2zej54990oq4ayss6nrkz   <none>           <none>
kube-system   kube-proxy-hf7jb                                  1/1     Running   0          4m42s   172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>
kube-system   kube-scheduler-iz2ze3rugpmg6ym2u7ntpoz            1/1     Running   0          4m50s   172.17.93.204    iz2ze3rugpmg6ym2u7ntpoz   <none>           <none>

25. Check the nodes; the cluster was installed successfully

root@iZ2ze3rugpmg6ym2u7ntpoZ:/home/glory/working# kubectl get nodes -o wide --all-namespaces
NAME                      STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
iz2ze3rugpmg6ym2u7ntpoz   Ready    control-plane,master   5m56s   v1.20.1   172.17.93.204   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   docker://20.10.1
iz2zej54990oq4ayss6nrjz   Ready    <none>                 3m3s    v1.20.1   172.17.93.207   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   docker://20.10.1
iz2zej54990oq4ayss6nrkz   Ready    <none>                 2m50s   v1.20.1   172.17.93.208   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   docker://20.10.1

View all pods in all namespaces:

kubectl get pod -o wide --all-namespaces
kubectl get po -A

26. Test deploying an application

Note: typing YAML directly into the file may break the indentation; you can write the content in another file format first and then convert it to .yaml.

vi nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-server
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.2-alpine

vi nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  ports:
    - port: 7878
      targetPort: 80
      protocol: TCP
      name: web80
      nodePort: 32333
  selector:
    app: nginx
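
The two manifests still need to be applied before the curl test below; for example, on the master:

kubectl apply -f nginx.yaml
kubectl apply -f nginx-service.yaml
# wait until the pod is Running, then check the NodePort service
kubectl get pod nginx-server -o wide
kubectl get svc web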

The service can then be accessed through any of the nodes:

curl 172.17.93.204:32333
curl 172.17.93.207:32333
curl 172.17.93.208:32333