Building a K8s Cluster with Orange Pi 4 and Raspberry Pi 4B, Part 1: Installing K8s

Table of Contents

1 Overview

1.1 Hardware and Software Environment

1.2 Design Goals

2 Implementation

2.1 Preparation

- Orange Pi (k8s-master-0)

- Raspberry Pi (k8s-worker-0)

- Steps common to both boards

2.2 Installing and Configuring containerd

Proxy settings

2.3 Installation

- Install the Flannel network plugin on k8s-master-0

- Joining the master node

- Joining the worker node

- Installing Helm

2.4 Installation Scripts

3 Problems Encountered

3.1 k8s-master-0

3.2 k8s-worker-0

4 Useful Commands

5 Tips

5.1 Pod Status Explained

5.2 Proxy Settings for Pods

5.3 DockerHub Access Secret

6 References


1 Overview

1.1 Hardware and Software Environment

Node          IP             OS                        Hardware          Specs                         Role
k8s-master-0  192.168.0.103  Ubuntu 22.04              Orange Pi 5B      8G RAM / 8 cores / 256G TF    control-plane node
k8s-master-1  192.168.0.106  Ubuntu 22.04              Orange Pi 4 LTS   4G RAM / 6 cores / 256G TF    control-plane node
k8s-worker-0  192.168.0.104  Ubuntu MATE 22.04.2 LTS   Raspberry Pi 4B   4G RAM / 4 cores / 256G TF    worker node

(For the worker, Raspberry Pi OS (Debian 11) was dropped because its kernel does not ship the ceph module.)

1.2 Design Goals

  • Build a K8s cluster (based on containerd v1.6.2 and K8s v1.27)
  • Two masters, one worker

2 Implementation

2.1 Preparation

- Orange Pi (k8s-master-0)

#Add apt sources: use either the Huawei Cloud or the Alibaba Cloud mirror

Huawei Cloud:

cat > /etc/apt/sources.list <<EOF
deb http://repo.huaweicloud.com/ubuntu-ports/ jammy main restricted universe multiverse
# deb-src http://repo.huaweicloud.com/ubuntu-ports/ jammy main restricted universe multiverse
deb http://repo.huaweicloud.com/ubuntu-ports/ jammy-security main restricted universe multiverse
# deb-src http://repo.huaweicloud.com/ubuntu-ports/ jammy-security main restricted universe multiverse
deb http://repo.huaweicloud.com/ubuntu-ports/ jammy-updates main restricted universe multiverse
# deb-src http://repo.huaweicloud.com/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb http://repo.huaweicloud.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
# deb-src http://repo.huaweicloud.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
EOF

Alibaba Cloud Ubuntu ARM source (note the architecture-to-directory mapping: amd64 <> ubuntu, arm64 <> ubuntu-ports):

cat > /etc/apt/sources.list <<EOF
deb http://mirrors.aliyun.com/ubuntu-ports/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu-ports/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu-ports/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu-ports/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu-ports/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu-ports/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu-ports/ jammy-backports main restricted universe multiverse 
EOF

#Declare the kernel modules to load at boot

tee /etc/modules-load.d/containerd.conf<<EOF
overlay
br_netfilter
EOF

#Load the kernel modules now

modprobe overlay && modprobe br_netfilter

#Set and apply the kernel parameters

tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
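A quick optional sanity check that the modules are loaded and the parameters applied:

# Both modules should be listed, and all three values should read 1
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward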

- Raspberry Pi (k8s-worker-0)

#Add apt sources

cat > /etc/apt/sources.list <<EOF
#Replace the file content with one of the following mirror sets: USTC mirror
deb https://mirrors.ustc.edu.cn/debian/ buster main contrib non-free
# deb-src http://mirrors.ustc.edu.cn/debian buster main contrib non-free
deb https://mirrors.ustc.edu.cn/debian/ buster-updates main contrib non-free
# deb-src http://mirrors.ustc.edu.cn/debian buster-updates main contrib non-free
deb https://mirrors.ustc.edu.cn/debian-security buster/updates main contrib non-free
# deb-src http://mirrors.ustc.edu.cn/debian-security/ buster/updates main non-free contrib
#Or the Tsinghua mirror (for aarch64 users)
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-updates main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-updates main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-backports main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian/ bullseye-backports main contrib non-free
deb https://mirrors.tuna.tsinghua.edu.cn/debian-security bullseye-security main contrib non-free
# deb-src https://mirrors.tuna.tsinghua.edu.cn/debian-security bullseye-security main contrib non-free
EOF

- Steps common to both boards

  Edit the /etc/hosts file

192.168.0.103 k8s-master-0
192.168.0.106 k8s-master-1
192.168.0.104 k8s-worker-0
199.232.28.133 raw.githubusercontent.com # so that kubectl apply can resolve it

Add the Kubernetes apt source

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat > /etc/apt/sources.list.d/kubernetes.list <<EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

Check for and install updates

apt update && apt upgrade -y

Install the required packages

apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

2.2 Installing and Configuring containerd

#Enable the Docker repository

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmour -o /etc/apt/trusted.gpg.d/docker.gpg

## ubuntu
# for x86_64 CPUs
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# for arm64 CPUs
add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

## debian
# for x86_64 CPUs
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
# for arm64 CPUs
add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

apt update && apt install -y containerd.io

#Generate containerd's default configuration file

containerd config default | tee /etc/containerd/config.toml >/dev/null 2>&1

#Switch the cgroup driver to systemd

sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

#Edit /etc/containerd/config.toml and change the sandbox image path

 #sandbox_image = "registry.k8s.io/pause:3.6"
 =>
 sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

systemctl daemon-reload
systemctl start containerd
systemctl enable containerd.service
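To double-check that the sandbox image change took effect, the live configuration can be dumped (optional check):

# The effective config should show the aliyun pause image
containerd config dump | grep sandbox_image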

Containerd/ctr proxy settings, to avoid image pull failures

Edit /lib/systemd/system/containerd.service

[Service]
Environment="HTTP_PROXY=http://192.168.0.108:1081"
Environment="HTTPS_PROXY=http://192.168.0.108:1081"
Environment="NO_PROXY=aliyun.com,aliyuncs.com,huaweicloud.com,k8s-master-0,k8s-master-1,k8s-worker-0,localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
systemctl daemon-reload && systemctl restart containerd
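A note on this: editing /lib/systemd/system/containerd.service directly can be undone by a package upgrade. An equivalent approach, sketched here with the same proxy address as above, is a systemd drop-in:

# Put the proxy variables in a drop-in instead of the packaged unit file
mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://192.168.0.108:1081"
Environment="HTTPS_PROXY=http://192.168.0.108:1081"
Environment="NO_PROXY=aliyun.com,aliyuncs.com,huaweicloud.com,k8s-master-0,k8s-master-1,k8s-worker-0,localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
EOF
systemctl daemon-reload && systemctl restart containerd
systemctl show containerd --property=Environment   # confirm the variables are picked up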

2.3 Installation

Temporarily disable swap. On my Orange Pi the swap partition keeps coming back after every reboot (I tried many approaches, none stuck); it simply refuses to die. I eventually worked around it by adding --fail-swap-on=false to the kubelet drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf); see the Problems Encountered section.

# swapoff -a         # disable swap temporarily
# sed -ri 's/.*swap.*/#&/' /etc/fstab    # did not help here
apt -y install kubeadm kubelet kubectl # go with the latest versions

# Pin the versions so they are not upgraded (for now, to avoid surprises)
apt-mark hold kubelet kubeadm kubectl 
systemctl enable kubelet.service

# Add to the environment variables
echo "export KUBECONFIG=/etc/kubernetes/kubelet.conf" >> /etc/profile
source /etc/profile

Initialize the master server (worker nodes do not run init). A regional image mirror is used here, otherwise the pulls would take forever.

# Check first whether the images can be pulled
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers

# Actual initialization; --upload-certs uploads the control-plane certificates automatically
kubeadm init --apiserver-advertise-address=192.168.0.103 \
 --pod-network-cidr=10.244.0.0/16 \
 --upload-certs \
 --image-repository registry.aliyuncs.com/google_containers \
 --control-plane-endpoint "k8s-master-0:6443"

# If something goes wrong, reset and start over
kubeadm reset -f

The success message after completion; note down the join parameters

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-master-0:6443 --token nbcz9u.8bpk1cqvc0bwxgv4 \
        --discovery-token-ca-cert-hash sha256:bed2b1df5cf2bff383cb239eef274c367ae5a3aa46fcd8dd6629b47d8b40a1b3 \
        --control-plane --certificate-key 5886b50335bb1db1b7a961bac745fc3b1e2b04626c308ca96c05ca66efa8f9e4

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master-0:6443 --token nbcz9u.8bpk1cqvc0bwxgv4 \
        --discovery-token-ca-cert-hash sha256:bed2b1df5cf2bff383cb239eef274c367ae5a3aa46fcd8dd6629b47d8b40a1b3

Import the admin config; otherwise the current user lacks the permissions to operate the cluster

mkdir -p $HOME/.kube && \
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Add it to the user's login profile
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.profile
source ~/.profile

- Install the Flannel network plugin on k8s-master-0

export KUBECONFIG=/etc/kubernetes/admin.conf

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
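To watch Flannel come up and the nodes turn Ready (the DaemonSet lands in the kube-flannel namespace, as the pod listing further below shows):

kubectl get pods -n kube-flannel -w   # wait until the kube-flannel-ds-* pods are Running
kubectl get nodes                     # nodes should move from NotReady to Ready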

- Joining the master node

  kubeadm join k8s-master-0:6443 --token nbcz9u.8bpk1cqvc0bwxgv4 \
        --discovery-token-ca-cert-hash sha256:bed2b1df5cf2bff383cb239eef274c367ae5a3aa46fcd8dd6629b47d8b40a1b3 \
        --control-plane --certificate-key 5886b50335bb1db1b7a961bac745fc3b1e2b04626c308ca96c05ca66efa8f9e4

- Joining the worker node

kubeadm join k8s-master-0:6443 --token nbcz9u.8bpk1cqvc0bwxgv4 \
        --discovery-token-ca-cert-hash sha256:bed2b1df5cf2bff383cb239eef274c367ae5a3aa46fcd8dd6629b47d8b40a1b3
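If the join happens more than 24 hours after init, the bootstrap token above will have expired; the parameters can be regenerated on k8s-master-0 with standard kubeadm commands:

# Print a fresh worker join command (new token plus CA cert hash)
kubeadm token create --print-join-command
# For an additional control-plane node, also regenerate the certificate key
kubeadm init phase upload-certs --upload-certs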

- Check node status

orangepi@k8s-master-0:~$ kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master-0   Ready    control-plane   43m   v1.27.3
k8s-master-1   Ready    control-plane   41m   v1.27.3
k8s-worker-0   Ready    <none>          26m   v1.27.3

  - To schedule Pods on the master/control-plane nodes, remove the taint:

# v1.25 and above
kubectl taint nodes --all node-role.kubernetes.io/control-plane- 

# below v1.25
kubectl taint nodes --all node-role.kubernetes.io/master-

After some twists and turns, all nodes and pods are finally in Running state. Hooray!

orangepi@k8s-master-0:~$ kubectl get ingress,services,pods -A
NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  57m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   57m

NAMESPACE      NAME                                       READY   STATUS    RESTARTS       AGE
kube-flannel   pod/kube-flannel-ds-vxf4q                  1/1     Running   1 (20m ago)    56m
kube-flannel   pod/kube-flannel-ds-wp995                  1/1     Running   1 (22m ago)    55m
kube-flannel   pod/kube-flannel-ds-zq2j7                  1/1     Running   0              39m
kube-system    pod/coredns-7bdc4cb885-8rw4l               1/1     Running   1 (20m ago)    57m
kube-system    pod/coredns-7bdc4cb885-brx7j               1/1     Running   1 (20m ago)    57m
kube-system    pod/etcd-k8s-master-0                      1/1     Running   15 (20m ago)   57m
kube-system    pod/etcd-k8s-master-1                      1/1     Running   2 (22m ago)    55m
kube-system    pod/kube-apiserver-k8s-master-0            1/1     Running   22 (20m ago)   57m
kube-system    pod/kube-apiserver-k8s-master-1            1/1     Running   3 (22m ago)    55m
kube-system    pod/kube-controller-manager-k8s-master-0   1/1     Running   22 (20m ago)   57m
kube-system    pod/kube-controller-manager-k8s-master-1   1/1     Running   2 (22m ago)    55m
kube-system    pod/kube-proxy-9hmj5                       1/1     Running   1 (20m ago)    57m
kube-system    pod/kube-proxy-l2wk2                       1/1     Running   0              39m
kube-system    pod/kube-proxy-sf9xv                       1/1     Running   1 (22m ago)    55m
kube-system    pod/kube-scheduler-k8s-master-0            1/1     Running   19 (20m ago)   57m
kube-system    pod/kube-scheduler-k8s-master-1            1/1     Running   2 (22m ago)    55m

- Installing Helm

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
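A quick smoke test after the install; the bitnami repository here is only an example:

helm version                                         # confirm the client talks to the cluster
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx                       # verify the repo index is usable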

2.4 Installation Scripts

Putting the above together, I assembled installation scripts (see the attachments to this article).

Execution order:

  1. k8s-setup.sh
  2. k8s-init.sh
    Note: init needs https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    Adjust the path yourself, or save it to /k8s_apps/kube-flannel/kube-flannel.yml

(Optional) a manual script to grant other users K8s admin rights:

  1. k8s-grant-user.sh 

3 Problems Encountered

3.1 k8s-master-0

- If the swap partition cannot be removed, the kubelet service will not start. Since Kubernetes versions after 1.21 can tolerate swap, adjusting the parameter (--fail-swap-on=false) is enough. How to set it:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --fail-swap-on=false

Append --fail-swap-on=false to the end of the ExecStart line, then reload the configuration:
systemctl daemon-reload
systemctl start kubelet
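An alternative I did not end up using, sketched here only as an assumption that should be equivalent, is setting failSwapOn in the kubelet configuration file instead of the command-line flag:

# Sketch: add failSwapOn: false as a top-level key of the kubeadm-generated kubelet config
sed -i '/^kind: KubeletConfiguration/a failSwapOn: false' /var/lib/kubelet/config.yaml
systemctl restart kubelet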

-  "The connection to the server localhost:8080 was refused - did you specify the right host or port?" 

cd /etc/kubernetes/

There is a file kubelet.conf in that directory; run:
echo "export KUBECONFIG=/etc/kubernetes/kubelet.conf" >> /etc/profile
source /etc/profile

Run kubectl get pods again and it now works.

Cause: the Kubernetes master was not bound to this machine during cluster initialization; setting this environment variable locally fixes it.

3.2 k8s-worker-0

- When joining, the preflight check reported: CGROUPS_MEMORY: missing

Fix: edit /boot/cmdline.txt and add:

cgroup_enable=memory cgroup_memory=1
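After the reboot, whether the memory cgroup controller is actually enabled can be verified with a simple check:

# The memory line should show 1 in the enabled column
cat /proc/cgroups | grep memory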

The node stays in NotReady state, and the log says: "Unable to update cni config: No networks found in /etc/cni/net.d"

Fix: remove --network-plugin=cni

nano /var/lib/kubelet/kubeadm-flags.env

# KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"
=>
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"

 "The following signatures couldn't be verified because the public key is not available: {key}"

Fix:

gpg --keyserver keyserver.ubuntu.com --recv  {key}
gpg --export --armor  {key} | sudo apt-key add -

 "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

Fix (see "CNI is not initialized in k8s v1.16.4", Issue #1236 in flannel-io/flannel on GitHub): running the following command turns the node Ready immediately

cat <<EOL > /etc/cni/net.d/10-flannel.conflist 
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOL

 "Failed to create pod sandbox: open /run/systemd/resolve/resolv.conf: no such file or directory"

Fix:

systemctl enable systemd-resolved.service
systemctl start systemd-resolved

 "failed to pull image \"registry.k8s.io/pause3.6:"

解决方法:

查看日志 journalctl -xeu kubelet
### 生成 containerd 的默认配置文件
containerd config default > /etc/containerd/config.toml 
### 查看 sandbox 的默认镜像仓库在文件中的第几行 
cat /etc/containerd/config.toml | grep -n "sandbox_image"  
### 使用 vim 编辑器 定位到 sandbox_image,将 仓库地址修改成 k8simage/pause:3.6
vim /etc/containerd/config.toml  
sandbox_image = "k8simage/pause:3.6"  
### 重启 containerd 服务  
systemctl daemon-reload  
systemctl restart containerd 

While operating, it turned out the current user was not kubernetes-admin@kubernetes: "Error from server (Forbidden): pods "kube-proxy-zvkbq" is forbidden: User "system:node:k8s-master-1" cannot get resource "pods/log" in API group "" in the namespace "kube-system""

export KUBECONFIG=/etc/kubernetes/admin.conf

Inexplicable CrashLoopBackOff on kube-flannel-ds-xxx or kube-proxy-xxx: check whether the cgroup driver was switched to systemd when containerd was installed

sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

4 Useful Commands

kubeadm token list  # list tokens
kubeadm token create # generate a new token
kubeadm init phase upload-certs --upload-certs # regenerate the certificate key
kubeadm certs check-expiration # show the expiry time of each certificate
kubeadm certs renew {CERTIFICATE} # renew a certificate, e.g. etcd-server
kubeadm reset -f # reset

kubectl cluster-info # get cluster information
kubectl logs -n kube-system kube-proxy-zvkbq {pod name} # view logs
kubectl auth can-i create namespace # check whether you are allowed to do something

kubectl get nodes # list nodes
kubectl describe node k8s-node-1 # inspect node k8s-node-1
kubectl describe nodes # inspect all nodes in detail

kubectl get pods -o wide -A # list all pods
kubectl get ingress,job -A
kubectl get deployments,services,pods -o wide # list Deployments, Services and Pods
kubectl get role -n kube-public
kubectl get sc  # list StorageClasses

# List the pods in default along with the containers each one holds
kubectl get pods -n default -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{.metadata.namespace}{"\t"}{range .spec.containers[*]}{.name}{"=>"}{.image}{","}{end}{end}'|sort|column -t

kubectl patch sc nfs-client -p '{"metadata": {"annotations": {"storageclass.beta.kubernetes.io/is-default-class": "true"}}}' # example: mark an SC as the default

kubectl config current-context # show the current context (user)
kubectl config view # show the configuration
kubectl config get-contexts # list contexts (users)

kubectl exec -it -n <namespace> <pod name> -- bash # enter a container
kubectl api-resources  # list the API resources the server supports

kubectl describe pod -n kube-system <pod name>  # inspect a pod
kubectl describe node k8s-master-1 | grep Taints  # show the node's current taints

kubectl taint node k8s-master-1 node-role.kubernetes.io/master:NoSchedule- # remove a taint from a node
kubectl taint node k8s-master-1 node-role.kubernetes.io/master=:NoSchedule # set a taint on a node

kubectl delete job -n <namespace> <job name>
kubectl delete -f path/example.yaml    # delete what a file deployed
kubectl delete deployment <name>  # delete a deployment
kubectl delete service <name>  # delete a service
kubectl delete node <node name>
kubectl delete pod -n <namespace>  <pod  name>

kubectl delete pod -n <namespace> -l app=nginx # delete the pods matching app=nginx
kubectl delete all  --all -n <namespace>

kubectl label node k8s-master test23_env=prod # add a label to a node
kubectl label node k8s-master test123_env- # remove a label

kubectl port-forward --namespace default svc/my-release-mariadb-galera 3306:3306 --address 127.0.0.1,192.168.0.106 & # forward a local port to a service port

# Temporarily forward a service port for testing; 192.168.0.106 is this host's LAN IP
kubectl --namespace tidb-cluster port-forward svc/basic-prometheus 9090:9090 --address 127.0.0.1,192.168.0.106

# Scale pods up / down (set replicas to 0 to pause)
# Deployment
kubectl scale --replicas=3 deployment/demo-deployment -n <namespace>
# ReplicaSet
kubectl scale --replicas=3 rs/demo-replicaset -n <namespace>
# ReplicationController
kubectl scale --replicas=3 rc/demo-replicationcontroller -n <namespace>
# StatefulSet
kubectl scale --replicas=3 statefulset/demo-statefulset -n <namespace>
kubectl scale --replicas=0 statefulset/demo-statefulset -n <namespace> # pause

journalctl -f -u kubelet    # view the kubelet log

5 Tips

  • On the Raspberry Pi it is worth installing proxychains, so sources can be fetched through a proxy and unreachable mirrors or resolution problems are avoided
  • The four files under /etc/kubernetes/ serve these purposes:
    admin.conf: used by kubectl to talk to the apiServer
    controller-manager.conf: used by the controllerManager to talk to the apiServer
    kubelet.conf: used by the kubelet to talk to the apiServer
    scheduler.conf: used by the scheduler to talk to the apiServer
  • To give a non-root user K8s admin capability:
    # copy the admin config
    mkdir -p $HOME/.kube && \
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # add it to the user's login profile
    echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.profile
    source ~/.profile
  • CPU: the usual unit is the millicore (m, or milli), or whole cores with no unit (1 core is written simply as 1). The conversion is 1 core = 1000m; fractions also work, e.g. a quarter core = 0.25, half a core = 0.5, one full core = 1 (see the example below).
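For instance, a quarter core can be requested as either 250m or 0.25. A small sketch with kubectl set resources (demo-deployment is a made-up name, substitute your own):

# Request a quarter core and cap the containers at half a core
kubectl set resources deployment demo-deployment -n default \
  --requests=cpu=250m,memory=128Mi --limits=cpu=500m,memory=256Mi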

5.1 Pod状态解释

CrashLoopBackOff                                容器退出,kubelet正在将它重启
InvalidImageName                                无法解析镜像名称
ImageInspectError                                无法校验镜像
ErrImageNeverPul                                策略禁止拉取镜像
ImagePullBackOff                                 正在重试拉取
RegistryUnavailable                              连接不到镜像中心
ErrImagePull                                         通用的拉取镜像出错
CreateContainerConfigError                 不能创建kubelet使用的容器配置
CreateContainerError                           创建容器失败
m.internalLifecycle.PreStartContainer  执行hook报错
RunContainerError                                启动容器失败
PostStartHookError                               执行hook报错
ContainersNotInitialized                        容器没有初始化完毕
ContainersNotReady                            容器没有准备完毕
ContainerCreating                                容器创建中
PodInitializing                                       pod 初始化中
DockerDaemonNotReady                    docker还没有完全启动
NetworkPluginNotReady                      网络插件还没有完全启动
Evicted                                                 即驱赶的意思,意思是当节点出现异常时,kubernetes将有相应的机制驱赶该节点上的Pod。 多见于资源不足时导致的驱赶。

5.2 Proxy Settings for Pods

Edit the corresponding deployment.yaml, e.g.:

      containers:
        - name: jenkins
          image: jenkins/jenkins:lts 
          env:
          - name: http_proxy
            value: http://192.168.0.108:1081
          - name: https_proxy
            value: http://192.168.0.108:1081
          - name: no_proxy
            value: aliyun.com,aliyuncs.com,huaweicloud.com,k8s-master-0,k8s-master-1,k8s-worker-0,localhost,127.0.0.1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
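Whether the variables actually reached the container can be checked from inside the running pod (the pod name placeholder is whatever kubectl get pods reports):

kubectl exec -n default <jenkins-pod-name> -- env | grep -i _proxy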

5.3 DockerHub Access Secret

Reference: Pull an Image from a Private Registry | Kubernetes

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson={path to the docker config, e.g. /home/orangepi/.docker/config.json} \
    --type=kubernetes.io/dockerconfigjson
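For pods to actually use the secret, it either goes into each pod spec under imagePullSecrets or gets attached to the namespace's default service account; the latter as a one-liner:

# Let every pod using the default service account pull with the regcred secret
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "regcred"}]}'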

6 References

https://github.com/hub-kubernetes/kubeadm-multi-master-setup

Building a K8S cluster with Raspberry Pi (ARM64 architecture, with installation scripts), NaclChan, CSDN blog

Creating a cluster with kubeadm | Kubernetes

Kubernetes installation and pitfalls (--apiserver-advertise-address), walden, CSDN blog

Quickly setting up a Kubernetes cluster with kubeadm (1.13+)

k8s notes 17: enabling swap on ubuntu & k8s, 昕光xg, CSDN blog
