Kubernetes Study Notes 1: Setting Up a k8s Cluster

Setting Up a Kubernetes Environment

1. Prerequisites
1.1 Versions Used
  • Docker 18.09.0
  • kubeadm-1.14.0-0
  • kubelet-1.14.0-0
  • kubectl-1.14.0-0
    • k8s.gcr.io/kube-apiserver:v1.14.0
    • k8s.gcr.io/kube-controller-manager:v1.14.0
    • k8s.gcr.io/kube-scheduler:v1.14.0
    • k8s.gcr.io/kube-proxy:v1.14.0
    • k8s.gcr.io/pause:3.1
    • k8s.gcr.io/etcd:3.3.10
    • k8s.gcr.io/coredns:1.3.1
  • calico:v3.9
1.2 Environment
Server                 IP             CPU   Memory
master-kubeadm-k8s     172.16.90.70   2C    4G
worker01-kubeadm-k8s   172.16.90.71   2C    4G
worker02-kubeadm-k8s   172.16.90.72   2C    4G
1.3 Disable the Firewall
systemctl stop firewalld && systemctl disable firewalld
1.4 Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
1.5 Disable Swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
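A quick check that swap is really off (the Swap line should be all zeros):
free -m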
1.6 Configure iptables ACCEPT Rules
iptables -F && iptables -X && iptables \
-F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
1.7 Set Kernel Parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
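If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is most likely not loaded yet. A small fix, assuming CentOS 7 as used in this setup:
modprobe br_netfilter
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables   # should print 1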
2. Install Dependencies
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
3. Install Docker

See the earlier post: Docker Study Notes 01: Installing Docker on CentOS

4. Edit the hosts File
vi /etc/hosts
172.16.90.70 master-kubeadm-k8s
172.16.90.71 worker01-kubeadm-k8s
172.16.90.72 worker02-kubeadm-k8s
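These entries are typically added on all three machines so every node can resolve the others; a quick check from any node:
ping -c 1 master-kubeadm-k8s
ping -c 1 worker01-kubeadm-k8s
ping -c 1 worker02-kubeadm-k8s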
5. Install kubeadm, kubelet, and kubectl
5.1 Configure the yum Repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
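An optional sanity check that the new repo works and the pinned 1.14.0-0 packages are visible:
yum list kubeadm kubelet kubectl --showduplicates | grep 1.14.0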
5.2 Install
  • Check any installed versions
rpm -qa | grep -i kubeadm
rpm -qa | grep -i kubelet
rpm -qa | grep -i kubectl
rpm -qa | grep -i kubernetes-cni
  • Remove any previously installed versions
sudo rpm -e --nodeps kubeadm
sudo rpm -e --nodeps kubelet
sudo rpm -e --nodeps kubectl
# kubelet depends on kubernetes-cni; a version that is too new is incompatible
sudo rpm -e kubernetes-cni-0.8.7-0.x86_64
  • Install
yum install -y kubectl-1.14.0-0
yum install -y kubelet-1.14.0-0
yum install -y kubeadm-1.14.0-0
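To confirm the pinned versions were installed:
kubeadm version
kubelet --version
kubectl version --client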
5.3 Use the Same cgroup Driver for Docker and the kubelet
# docker
vi /etc/docker/daemon.json
{
	"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker

# kubelet: a "No such file or directory" message from the next command is expected

[root@master-kubeadm-k8s docker]# sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sed: can't read /etc/systemd/system/kubelet.service.d/10-kubeadm.conf: No such file or directory

[root@master-kubeadm-k8s docker]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
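To confirm Docker picked up the new cgroup driver:
docker info | grep -i cgroup
# expected: Cgroup Driver: systemd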
6. Pull the proxy/pause/scheduler Images from a Domestic Mirror
6.1 List the Images kubeadm Needs
[root@master-kubeadm-k8s docker]# kubeadm config images list
I0324 04:25:13.308657    6757 version.go:240] remote version is much newer: v1.20.5; falling back to: stable-1.14
k8s.gcr.io/kube-apiserver:v1.14.10
k8s.gcr.io/kube-controller-manager:v1.14.10
k8s.gcr.io/kube-scheduler:v1.14.10
k8s.gcr.io/kube-proxy:v1.14.10
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

All of these are hosted on k8s.gcr.io, which is unreachable from here, so they cannot be pulled directly.

6.2 Work Around the Unreachable Registry
  • Create a kubeadm.sh script that pulls each image from the Aliyun mirror, re-tags it as k8s.gcr.io, and removes the mirror tag
#!/bin/bash

set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
  • Run the script
sh ./kubeadm.sh
  • Check the images
[root@master-kubeadm-k8s ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.0             5cd54e388aba        24 months ago       82.1MB
k8s.gcr.io/kube-scheduler            v1.14.0             00638a24688b        24 months ago       81.6MB
k8s.gcr.io/kube-controller-manager   v1.14.0             b95b1efa0436        24 months ago       158MB
k8s.gcr.io/kube-apiserver            v1.14.0             ecf910f40d6e        24 months ago       210MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        2 years ago         40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        2 years ago         258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB
6.3 Push the Local Images to a Harbor Registry
  • Configure /etc/docker/daemon.json
vi /etc/docker/daemon.json
{
    "insecure-registries": ["127.0.0.1:4443"]
}
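Note that /etc/docker/daemon.json is a single JSON document, so this key has to be merged with the exec-opts setting from section 5.3 rather than replacing it, and Docker must be restarted afterwards. A combined file for this setup would look like:
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "insecure-registries": ["127.0.0.1:4443"]
}
systemctl restart docker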
  • Log in to the Harbor registry
[root@master-kubeadm-k8s ~]# docker login https://127.0.0.1:4443
Username: dev_user
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
  • Create the push script
[root@master-kubeadm-k8s ~]# vi kubeadm-push-harbor.sh

#!/bin/bash

set -e

KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
# Note: the registry address must NOT include the https:// prefix
HARBOR_URL=127.0.0.1:4443/k8s_pub

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker tag $GCR_URL/$imageName $HARBOR_URL/$imageName
  docker push $HARBOR_URL/$imageName
  docker rmi $HARBOR_URL/$imageName
done
  • Run the script
sh ./kubeadm-push-harbor.sh
  • Check the registry

(Screenshot: the pushed k8s images listed in the Harbor project)

7. Deploy the Master and Worker Nodes

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

7.1 Initialize the Master Node

If the master is being re-deployed, reset it first:

kubeadm reset
  • Run the init command
[root@master-kubeadm-k8s ~]# kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=172.16.90.70 --pod-network-cidr=172.16.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-kubeadm-k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.90.70]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-kubeadm-k8s localhost] and IPs [172.16.90.70 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-kubeadm-k8s localhost] and IPs [172.16.90.70 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.505230 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master-kubeadm-k8s as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-kubeadm-k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 893xaf.kezrdaf870kznas5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.90.70:6443 --token 893xaf.kezrdaf870kznas5 \
    --discovery-token-ca-cert-hash sha256:5e7742d86932baa3b7143aad3a01264fe4db18fa26bdb313503e291002f058a8
  • Run the commands suggested in the output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
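Since everything here is run as root, an equivalent shortcut is to point kubectl at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl cluster-info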
  • Check the pods in the kube-system namespace
[root@master-kubeadm-k8s ~]# kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-mnwd4                      0/1     Pending   0          3m28s
coredns-fb8b8dccf-mxdbq                      0/1     Pending   0          3m28s
etcd-master-kubeadm-k8s                      1/1     Running   0          2m40s
kube-apiserver-master-kubeadm-k8s            1/1     Running   0          2m30s
kube-controller-manager-master-kubeadm-k8s   1/1     Running   0          2m49s
kube-proxy-9g7sz                             1/1     Running   0          3m27s
kube-scheduler-master-kubeadm-k8s            1/1     Running   0          2m46s

The CoreDNS pods stay Pending because no network plugin is installed yet.

  • Check all pods
[root@master-kubeadm-k8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-mnwd4                      0/1     Pending   0          5m51s
kube-system   coredns-fb8b8dccf-mxdbq                      0/1     Pending   0          5m51s
kube-system   etcd-master-kubeadm-k8s                      1/1     Running   0          5m3s
kube-system   kube-apiserver-master-kubeadm-k8s            1/1     Running   0          4m53s
kube-system   kube-controller-manager-master-kubeadm-k8s   1/1     Running   0          5m12s
kube-system   kube-proxy-9g7sz                             1/1     Running   0          5m50s
kube-system   kube-scheduler-master-kubeadm-k8s            1/1     Running   0          5m9s
  • Check API server health
[root@master-kubeadm-k8s ~]# curl -k https://localhost:6443/healthz
ok
7.2 Deploy the Calico Network Plugin (master node)

https://kubernetes.io/docs/concepts/cluster-administration/addons/

https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

  • Check the required images
[root@master-kubeadm-k8s ~]# curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image
image: calico/cni:v3.9.6
image: calico/cni:v3.9.6
image: calico/pod2daemon-flexvol:v3.9.6
image: calico/node:v3.9.6
image: calico/kube-controllers:v3.9.6
  • Pull the images manually (slow)
docker pull calico/cni:v3.9.6
docker pull calico/pod2daemon-flexvol:v3.9.6
docker pull calico/node:v3.9.6
docker pull calico/kube-controllers:v3.9.6
  • Push the local images to the Harbor registry
[root@master-kubeadm-k8s ~]# vi calico-push-harbor.sh

#!/bin/bash
set -e

CALICO_VERSION=v3.9.6

CALICO_URL=calico
# Note: the registry address must NOT include the https:// prefix
HARBOR_URL=127.0.0.1:4443/calico_pub

images=(cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})

for imageName in ${images[@]} ; do
  docker tag $CALICO_URL/$imageName $HARBOR_URL/$imageName
  docker push $HARBOR_URL/$imageName
  docker rmi $HARBOR_URL/$imageName
done

Run the script

sh ./calico-push-harbor.sh

(Screenshot: the Calico images listed in the Harbor project)

  • Pull the images from the Harbor registry (recommended)
[root@master-kubeadm-k8s ~]# vi calico.sh

#!/bin/bash

set -e

CALICO_VERSION=v3.9.6

CALICO_URL=calico

HARBOR_URL=127.0.0.1:4443/calico_pub

images=(cni:${CALICO_VERSION}
pod2daemon-flexvol:${CALICO_VERSION}
node:${CALICO_VERSION}
kube-controllers:${CALICO_VERSION})

for imageName in ${images[@]} ; do
  docker pull $HARBOR_URL/$imageName
  docker tag  $HARBOR_URL/$imageName $CALICO_URL/$imageName
  docker rmi $HARBOR_URL/$imageName
done

Run the script

sh ./calico.sh
  • Install Calico into the cluster
yum install -y wget
wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
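The cluster was initialized with --pod-network-cidr=172.16.0.0/16, while the Calico v3.9 manifest usually ships with a default pool of 192.168.0.0/16. Before applying, it is worth checking CALICO_IPV4POOL_CIDR in the downloaded file and, if it still holds the default, pointing it at the CIDR used at init:
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
sed -i 's#192.168.0.0/16#172.16.0.0/16#g' calico.yaml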
[root@master-kubeadm-k8s ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
  • Confirm that Calico is running
[root@master-kubeadm-k8s ~]# kubectl get pods --all-namespaces -w
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-f67d5b96f-zg28g      1/1     Running   0          2m25s
kube-system   calico-node-hz7p6                            1/1     Running   0          2m25s
kube-system   coredns-fb8b8dccf-mnwd4                      1/1     Running   0          67m
kube-system   coredns-fb8b8dccf-mxdbq                      1/1     Running   0          67m
kube-system   etcd-master-kubeadm-k8s                      1/1     Running   0          66m
kube-system   kube-apiserver-master-kubeadm-k8s            1/1     Running   0          66m
kube-system   kube-controller-manager-master-kubeadm-k8s   1/1     Running   0          66m
kube-system   kube-proxy-9g7sz                             1/1     Running   0          67m
kube-system   kube-scheduler-master-kubeadm-k8s            1/1     Running   0          66m
8. kubeadm join
8.1 Run the Join Command on worker01 and worker02
kubeadm join 172.16.90.70:6443 --token 893xaf.kezrdaf870kznas5 \
    --discovery-token-ca-cert-hash sha256:5e7742d86932baa3b7143aad3a01264fe4db18fa26bdb313503e291002f058a8
8.2 Check the Cluster from the Master Node
[root@master-kubeadm-k8s ~]# kubectl get nodes
NAME                   STATUS     ROLES    AGE   VERSION
master-kubeadm-k8s     Ready      master   70m   v1.14.0
worker01-kubeadm-k8s   NotReady   <none>   9s    v1.14.0
worker02-kubeadm-k8s   NotReady   <none>   16s   v1.14.0
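The workers stay NotReady until the kube-proxy and calico-node pods have been pulled and started on them, which can take a few minutes; progress can be watched with:
kubectl get pods -n kube-system -o wide -w
kubectl get nodes -w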
8.3 Remove a Node from the Cluster
# First drain the node (put it into maintenance mode)
kubectl drain worker01-kubeadm-k8s --delete-local-data --force --ignore-daemonsets
# Then delete the node
kubectl delete node worker01-kubeadm-k8s
8.4 Rejoin a Node
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
systemctl start docker

# Join the cluster again
kubeadm join 172.16.90.70:6443 --token 893xaf.kezrdaf870kznas5 \
    --discovery-token-ca-cert-hash sha256:5e7742d86932baa3b7143aad3a01264fe4db18fa26bdb313503e291002f058a8
8.5 View the kubelet Logs on a Node
journalctl -f -u kubelet
8.6 Get a Join Command from the Master
[root@master-kubeadm-k8s ~]# kubeadm token create --print-join-command
kubeadm join 172.16.90.70:6443 --token o2khr1.y82xigbj0p0yv9pd     --discovery-token-ca-cert-hash sha256:5e7742d86932baa3b7143aad3a01264fe4db18fa26bdb313503e291002f058a8
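The bootstrap token printed by kubeadm init expires after 24 hours by default, which is why a fresh one is generated here; existing tokens can be listed with:
kubeadm token list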
9. Deploy nginx on the Cluster
9.1 Define the pod_nginx.yml Manifest
cat > pod_nginx.yml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
9.2 Create the Pods from pod_nginx.yml
kubectl apply -f pod_nginx.yml
9.3 Inspect the Pods
kubectl get pods
kubectl get pods -o wide
kubectl describe pod nginx
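With Calico providing the pod network, the pod IPs shown by -o wide are reachable from any cluster node, so a quick smoke test is to curl one of them (substitute an IP from the output above):
curl http://<pod-ip>   # should return the nginx welcome page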
9.4 Scale the ReplicaSet
kubectl scale rs nginx --replicas=5
kubectl get pods -o wide
9.5 Delete the Pods
kubectl delete -f pod_nginx.yml