Single-Node Kubernetes Deployment

Single-node Kubernetes deployment (CentOS 7.9, 64-bit)

1. Configuration

CPU: 4 cores
Memory: 2 GB
OS: CentOS 7.9, 64-bit (check it with: cat /etc/redhat-release)
Docker version: Docker version 19.03.13
kubelet-1.19.4
kubeadm-1.19.4
kubectl-1.19.4
A static IP is required (a dynamic IP will eventually change, which breaks the cluster)

2. Environment Preparation

For Docker installation, refer to:

  1. https://blog.csdn.net/qq_29956725/article/details/88343225
  2. Set up the basic prerequisites for Kubernetes:
#turn off the firewall
systemctl stop firewalld
#put SELinux in permissive mode (current boot only; set SELINUX=disabled in /etc/selinux/config to persist)
setenforce 0
#disable swap (current boot only; comment out the swap line in /etc/fstab to persist)
swapoff -a
#pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  #apply
#configure the Kubernetes yum repository
cat >/etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
  3. Pull the images Kubernetes needs with Docker:

Since k8s.gcr.io is not reachable from mainland China, the workaround is to first pull all the required images from Alibaba Cloud's mirror, then retag them to the names Kubernetes expects.
After installation you can list the required images with: kubeadm config images list
Create a k8s.sh script that downloads the images and fixes the tags:

#!/bin/bash
images=(
    kube-apiserver:v1.19.4
    kube-controller-manager:v1.19.4
    kube-scheduler:v1.19.4
    kube-proxy:v1.19.4
    pause:3.2
    etcd:3.4.13-0
    coredns:1.7.0
)
for imageName in "${images[@]}" ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done

#run the script
./k8s.sh
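Once the script has run, it is worth confirming that none of the pulls silently failed. A minimal sketch of such a check (the `missing_images` helper is my own, not part of the original script; in practice you would feed it the output of `docker images --format '{{.Repository}}:{{.Tag}}'`):

```shell
#!/bin/bash
# Print every image from the expected list that is absent from the "present"
# list. Pure string comparison, so it can be tried without a Docker daemon.
missing_images() {
    local expected="$1" present="$2" img
    for img in $expected; do
        case " $present " in
            *" $img "*) ;;                 # found, nothing to report
            *) echo "missing: $img" ;;
        esac
    done
}

# Sample data: etcd was never retagged in this example run.
expected="k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0"
present="k8s.gcr.io/pause:3.2 k8s.gcr.io/coredns:1.7.0"
missing_images "$expected" "$present"   # reports the etcd image as missing
```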


Then give the script execute permission:
sudo chmod -R 777 k8s.sh
#check that the images are downloaded and retagged
docker images

4. Install the CNI network plugin; flannel is used here. Because raw.githubusercontent.com is not reachable from mainland China, the download may fail:

curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >>kube-flannel.yml
chmod 777 kube-flannel.yml 
kubectl apply -f kube-flannel.yml 

If the download fails, this article describes the workaround:
https://blog.csdn.net/chen_haoren/article/details/108580338

Add a hosts entry so the domain resolves:

vim /etc/hosts
199.232.68.133 raw.githubusercontent.com

The downloaded kube-flannel.yml has the following content:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Flannel depends on the flannel:v0.13.1-rc1 image, so we download it next. Note the 10.244.0.0/16 network in the manifest above: it must match the pod network CIDR passed to kubeadm init later. Remember to give kube-flannel.yml its permissions:

chmod 777 kube-flannel.yml

Manually download flanneld-v0.13.1-rc1-amd64.docker and load it into Docker:

wget https://github.com/coreos/flannel/releases/download/v0.13.1-rc1/flanneld-v0.13.1-rc1-amd64.docker
docker load < flanneld-v0.13.1-rc1-amd64.docker
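Because the manifest's 10.244.0.0/16 network must line up with the CIDR later given to kubeadm init, a scripted check can catch a mismatch early. A sketch, run against an inlined sample of the net-conf.json section rather than a live file (point FLANNEL_YML at your real kube-flannel.yml in practice):

```shell
#!/bin/bash
# Extract the "Network" value from flannel's net-conf.json and compare it
# with the CIDR you plan to pass as --pod-network-cidr to kubeadm init.
FLANNEL_YML=$(mktemp)
cat > "$FLANNEL_YML" <<'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
EOF

POD_CIDR="10.244.0.0/16"
flannel_cidr=$(grep -o '"Network": *"[^"]*"' "$FLANNEL_YML" | cut -d'"' -f4)
if [ "$flannel_cidr" = "$POD_CIDR" ]; then
    echo "OK: flannel Network matches --pod-network-cidr ($POD_CIDR)"
else
    echo "MISMATCH: flannel=$flannel_cidr kubeadm=$POD_CIDR"
fi
rm -f "$FLANNEL_YML"
```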
  5. Set Docker's default cgroup driver. This Kubernetes setup uses the cgroupfs cgroup driver, so Docker must use the same one: "exec-opts": ["native.cgroupdriver=cgroupfs"]. Do not switch to systemd here; doing so would require matching changes in several places in the Kubernetes configuration. After editing, restart Docker.

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://faa1mjpp.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
Restart Docker with:
sudo systemctl restart docker
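A mismatch between Docker's and the kubelet's cgroup drivers is a classic reason for pods failing to start, so it is worth verifying after the restart. A small sketch (the `same_cgroup_driver` helper is hypothetical; on a real node the Docker side comes from `docker info --format '{{.CgroupDriver}}'`):

```shell
#!/bin/bash
# Compare the two cgroup driver strings; Kubernetes will misbehave if
# Docker and the kubelet disagree. The demo calls use sample values.
same_cgroup_driver() {
    if [ "$1" = "$2" ]; then
        echo "match: $1"
    else
        echo "MISMATCH: docker=$1 kubelet=$2"
    fi
}

same_cgroup_driver cgroupfs cgroupfs   # the setup in this guide
same_cgroup_driver systemd cgroupfs    # the failure case to watch for
```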

3. Install kubeadm, kubelet, and kubectl

#install kubeadm (initializes the cluster), kubelet (runs the pods), and kubectl (the Kubernetes CLI)
yum install -y kubelet-1.19.4
yum install -y kubeadm-1.19.4
yum install -y kubectl-1.19.4

#enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet

The kubeadm and kubectl commands are now available.

4. Initialize the Cluster

Remember that the pod network CIDR must be the same range flannel uses. Both --pod-network-cidr and --service-cidr are virtual ranges (a 172.x network would work for either), as long as they do not overlap.

#initialize the master
kubeadm init --kubernetes-version=v1.19.4 --ignore-preflight-errors=NumCPU --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --v=6

The init output will still show some errors by default; they disappear once we adjust a few of Kubernetes' default configuration files later on.

After a successful init, follow its output and run:

#set up kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Without these three commands, kubectl fails with: The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?

#check component status
kubectl get cs

If kube-controller-manager and kube-scheduler report Unhealthy (127.0.0.1 connection refused), see the article on fixing the kubernetes v1.18.6-1.19.0 "get cs 127.0.0.1 connection refused" error in the references below.
In kube-controller-manager.yaml, comment out line 27 (the --port=0 flag):

  1 apiVersion: v1
  2 kind: Pod
  3 metadata:
  4   creationTimestamp: null
  5   labels:
  6     component: kube-controller-manager
  7     tier: control-plane
  8   name: kube-controller-manager
  9   namespace: kube-system
 10 spec:
 11   containers:
 12   - command:
 13     - kube-controller-manager
 14     - --allocate-node-cidrs=true
 15     - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
 16     - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
 17     - --bind-address=127.0.0.1
 18     - --client-ca-file=/etc/kubernetes/pki/ca.crt
 19     - --cluster-cidr=10.244.0.0/16
 20     - --cluster-name=kubernetes
 21     - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
 22     - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
 23     - --controllers=*,bootstrapsigner,tokencleaner
 24     - --kubeconfig=/etc/kubernetes/controller-manager.conf
 25     - --leader-elect=true
 26     - --node-cidr-mask-size=24
 27   #  - --port=0
 28     - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
 29     - --root-ca-file=/etc/kubernetes/pki/ca.crt
 30     - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
 31     - --service-cluster-ip-range=10.1.0.0/16
 32     - --use-service-account-credentials=true

In kube-scheduler.yaml, comment out line 19 (the --port=0 flag):

  1 apiVersion: v1
  2 kind: Pod
  3 metadata:
  4   creationTimestamp: null
  5   labels:
  6     component: kube-scheduler
  7     tier: control-plane
  8   name: kube-scheduler
  9   namespace: kube-system
 10 spec:
 11   containers:
 12   - command:
 13     - kube-scheduler
 14     - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
 15     - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
 16     - --bind-address=127.0.0.1
 17     - --kubeconfig=/etc/kubernetes/scheduler.conf
 18     - --leader-elect=true
 19   #  - --port=0
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
vim /etc/kubernetes/manifests/kube-scheduler.yaml
#after commenting out the lines, restart the kubelet
systemctl restart kubelet.service
#check component status again
kubectl get cs

Check the nodes:

kubectl get node

The node is NotReady because flannel has not been applied yet, so apply it:

chmod 777 kube-flannel.yml 
kubectl apply -f kube-flannel.yml 

Wait about a minute, then check the node status again:

kubectl get node
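The "wait about a minute" step can be scripted as a polling loop. A sketch with a generic `retry` helper (a name of my own, not a standard tool); on the master you would poll the real `kubectl get node` output instead of the simulated command:

```shell
#!/bin/bash
# Retry a command up to N times with a fixed sleep, returning as soon as it
# succeeds. On the master it could be used like:
#   retry 30 2 sh -c "kubectl get node | grep -wq Ready"
retry() {
    local attempts="$1" delay="$2" i
    shift 2
    for i in $(seq 1 "$attempts"); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# Demo with a command that always succeeds, standing in for the node check.
retry 3 0 true && echo "node became Ready (simulated)"
```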

Next, check the network initialization logs and the status of all the default pods:

kubectl get pods -n kube-system -o wide

If a particular pod has problems, inspect it like this:

kubectl logs -f coredns-5c98db65d4-8wt9z -n kube-system

Allow the master to act as a worker node as well by removing its taint:

kubectl taint nodes --all node-role.kubernetes.io/master-

Otherwise scheduling fails with:
Warning FailedScheduling 54s default-scheduler 0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate
For security, clusters initialized with kubeadm do not schedule pods onto the master node, so it takes no workload by default. Allowing the master to run pods resolves the problem:
https://blog.csdn.net/CEVERY/article/details/109104447

Commands for querying error logs:

#view the kubelet log
journalctl -f -u kubelet.service

If there are other errors, such as CoreDNS restarting repeatedly, check the logs; the cause may be broken networking. Verify that cni0 and flannel.1 are in the same subnet, and fix them if they are not:

ifconfig
If installing kubernetes-dashboard keeps failing with errors like dial tcp 10.96.0.1:443: i/o timeout, tear down the cni0 bridge and let it be recreated:

sudo ifconfig cni0 down
sudo ip link delete cni0

Reference:
https://blog.csdn.net/ibless/article/details/107899009
There are other possible issues too: for example, if CoreDNS keeps restarting, comment out the loop plugin line in the Corefile held in the coredns ConfigMap.
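The Corefile change itself is a one-line deletion. A sketch against an inlined sample Corefile (on a live cluster you would instead edit the ConfigMap with `kubectl -n kube-system edit configmap coredns` and then delete the CoreDNS pods so they restart with the new config):

```shell
#!/bin/bash
# Remove the "loop" plugin line from a Corefile. The sample below mirrors
# the general shape of a default CoreDNS Corefile.
COREFILE=$(mktemp)
cat > "$COREFILE" <<'EOF'
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
EOF

sed -i '/^ *loop$/d' "$COREFILE"
loop_left=$(grep -c '^ *loop$' "$COREFILE" || true)
echo "loop lines remaining: $loop_left"
rm -f "$COREFILE"
```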
Once everything is in order, the single-node Kubernetes setup is basically complete.

5. Create a Pod

kubectl create deployment nginx --image=nginx
#check the pod status; wait until it is Running before continuing
kubectl get pod
#if it never reaches Running, inspect the startup events
kubectl describe pod {pod-name}
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
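`kubectl get svc` prints the assigned NodePort in its PORT(S) column as `80:3xxxx/TCP`; pulling the number out lets you curl the service from the node. The parsing is shown on a sample value (a live session could instead use `kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'`):

```shell
#!/bin/bash
# Extract the NodePort from a PORT(S) value such as "80:31234/TCP".
# The service could then be smoke-tested with: curl http://<node-ip>:$node_port/
ports="80:31234/TCP"          # sample value copied from `kubectl get svc` output
node_port=${ports#*:}         # drop the leading "80:"  -> "31234/TCP"
node_port=${node_port%/*}     # drop the trailing "/TCP" -> "31234"
echo "NodePort: $node_port"
```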

With that, our Kubernetes setup is complete.

6. References

  1. Overall status queries
    https://www.cnblogs.com/codelove/p/11466217.html
  2. default-scheduler 0/1 nodes are available (by default the master is not considered for scheduling pods)
    https://blog.csdn.net/CEVERY/article/details/109104447
  3. kubectl get cs 127.0.0.1 connection refused error
    https://blog.csdn.net/cymm_liu/article/details/108458197
  4. General blog walkthrough
    https://blog.csdn.net/qq_45453266/article/details/109897843
  5. Unable to download flannel
    https://blog.csdn.net/chen_haoren/article/details/108580338
  6. etcd error: /var/lib/etcd is not empty
    Fix: rm -rf /var/lib/etcd
    https://blog.csdn.net/qq_39346534/article/details/107630835
  7. If the network is unreachable, delete it and let it be recreated; see
    https://blog.csdn.net/ibless/article/details/107899009

7. Common Commands

  kubectl get pods -n kube-system -o wide
  systemctl restart kubelet.service
  journalctl -f -u kubelet.service
  systemctl enable kubelet && systemctl start kubelet
  sudo chmod -R 777 k8s.sh
  
  kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.19.4 --ignore-preflight-errors=NumCPU --service-cidr=10.96.0.0/12 --v=6
  
  systemctl restart kubelet.service
  sudo journalctl -xe | grep cni
  kubectl apply -f kube-flannel.yml
  docker load < flanneld-v0.13.1-rc1-amd64.docker
  
  kubectl get pods -n kube-system
  
  kubectl logs -f coredns-5c98db65d4-8wt9z -n kube-system
  kubectl logs -f replicaset.apps/coredns-f9fd979d6 -n kube-system
  kubectl describe pods -n kube-system coredns-f9fd979d6-gfg4k
  kubectl delete pod coredns-xxx-xxxx -n kube-system
  kubectl logs -f kube-flannel-ds-amd64-hl89n -n kube-system
  journalctl -f -u kubelet.service
  journalctl -u kubelet -n 1000
  
 #to tear the control plane down, remove its static pod manifests and config
 rm -f /etc/kubernetes/manifests/kube-controller-manager.yaml
 rm -f /etc/kubernetes/manifests/kube-apiserver.yaml
 rm -f /etc/kubernetes/manifests/kube-scheduler.yaml
 rm -f /etc/kubernetes/manifests/etcd.yaml
 rm -r /etc/kubernetes/
 
Uninstalling Docker on Linux

1. List the installed packages

yum list installed | grep docker

docker-engine.x86_64                 17.03.0.ce-1.el7.centos         @dockerrepo

2. Remove the installed packages

yum -y remove docker-engine.x86_64

3. Delete the images, containers, and other data

rm -rf /var/lib/docker

Vim tip: :e! discards your edits and reloads the file as it was when opened.

Granting permissions for the Kubernetes dashboard:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://blog.csdn.net/zhangbaoxiang/article/details/106559533

View the access token:
kubectl describe secrets -n kubernetes-dashboard
eyJhbGciOiJSUzI1NiIsImtpZCI6IjFMMGRGT1kzRnlnN0lxU0VJZFgtWE9LZDJINFotLUpXclRhQ3lrYzY4WWcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLXNrOWpwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNGZlMWQ2Yy03ZWMzLTRlYTgtYjZiMi04NWZjMjkyZWYwMDciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6ZGVmYXVsdCJ9.UtkrrztQWu2oqQ--CjYyHIcgHlZa0wyDRccVtIgtxCRR0KHRU3kLZc3McBPas8WfNa-ElS2BRwAixEmTfKVkFesjFT2zOa1UC9oKlOHwoFv-7DEnvLsdSOYnWj31MKs-2L4opaj9A2VGRy5QsEmQSpjdCBphcP-H-Q1iQRITyAbw2NdOvJxtJT90L106UnryB95Gsk4LXjYzadiCoCT4yJqffPyQwuvKE2F0glvDqXOh0kuWUL7EzSwh4dKp4xjl9_2lM5FhYJgPZfwN-ewbBh5LujEWyizrKj3zLbGLIU-S2jpr9atJ2mz6-NxebAOi67vgtHx-l4It3DminYy9HA
Dashboard port: 30000


