Installing Kubernetes (k8s) 1.30 on Ubuntu 24.04

1. Update the apt package index and install the packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

2. Download the public signing key for the Kubernetes package repositories. The same signing key is used for all repositories, so you can ignore the version in the URL:

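On Ubuntu 24.04 the /etc/apt/keyrings directory should already exist; on older releases you may need to create it first (per the upstream install docs):

sudo mkdir -p -m 755 /etc/apt/keyrings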
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

3. Add the Kubernetes apt repository.

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list.
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their versions:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
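Optionally, you can also enable the kubelet service right away. It will keep restarting in a crash loop until kubeadm init gives it instructions, which is expected:

sudo systemctl enable --now kubelet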

5. Disable swap; Kubernetes will not start with swap enabled.

sudo swapoff -a
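Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, a common approach is to comment out the swap entry in /etc/fstab (a sketch, assuming swap is configured there, which is the Ubuntu default):

# comment out any swap lines in /etc/fstab so swap stays disabled after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab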

6. Install containerd (skip this step if it is already installed):

sudo apt-get install containerd
# start it
sudo systemctl start containerd
# check whether it is running
sudo systemctl status containerd
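Depending on how containerd was installed, its stock configuration may not use the systemd cgroup driver (or may have the CRI plugin disabled). A common fix, sketched here on the assumption that the config lives at /etc/containerd/config.toml, is to regenerate a full default config and switch runc to SystemdCgroup:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
# use the systemd cgroup driver so containerd matches the kubelet/Docker setting configured below
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd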

7. Make sure Docker is working properly:

sudo systemctl status docker

7.1 Edit the Docker configuration file

If this directory does not exist, create it yourself:

sudo mkdir -p /etc/docker

Once it exists, edit the file:

sudo vim /etc/docker/daemon.json

Insert the following content to change Docker's default cgroup driver from cgroupfs to systemd:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

7.2 Restart Docker

sudo systemctl restart docker
# it doesn't hurt to restart the kubelet as well
sudo systemctl restart kubelet
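You can confirm the cgroup driver change took effect with:

sudo docker info | grep -i 'cgroup driver'
# expected output: Cgroup Driver: systemd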

8. Generate the default kubeadm YAML configuration file:

sudo kubeadm config print init-defaults > init.default.yaml
# if you get a permission error, grant write access to the target directory, for example:
sudo chmod 777 <directory>

9. Configure init.default.yaml

vim init.default.yaml

9.1 Change the image repository address in this file, then save and exit.

Change 1.2.3.4 to your own IP address. Also check the name under nodeRegistration: if it is master,
run vim /etc/hosts
and add a line mapping your own IP to master.

imageRepository: registry.aliyuncs.com/google_containers
advertiseAddress: 1.2.3.4
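For example, if your node name is master and your IP is 192.168.67.128 (the address that appears in the join command later), the /etc/hosts entry could be added like this (illustrative values, substitute your own):

# append the hostname mapping (replace the IP and name with your own)
echo "192.168.67.128 master" | sudo tee -a /etc/hosts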

10. Run the following command:

kubeadm config images pull --config=init.default.yaml

This pulls the images we need; output like the following means the pull has finished:

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.30.0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0

11. Run the init command, specifying the image repository so it does not time out pulling images:

sudo kubeadm init --image-repository=registry.aliyuncs.com/google_containers
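If you already know you will use flannel as the network plugin (as in the troubleshooting section below), it can save trouble to also pass the pod network CIDR at init time so every node gets a podCIDR assigned automatically; a variant of the same command (the 10.244.0.0/16 range matches the flannel config used later):

sudo kubeadm init --image-repository=registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16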

Success!!!!!!

12. Don't panic; continue with the following commands:

# if you are not root
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# if you are root (to make this permanent, add the export line to /etc/profile or ~/.bashrc and reload it)
export KUBECONFIG=/etc/kubernetes/admin.conf
source /etc/profile
# remember: you do NOT need to run the join command below here; it is what kubeadm init printed for joining worker nodes
kubeadm join 192.168.67.128:6443 --token pevxnb.k75nwsq9c8j6hi19 \
	--discovery-token-ca-cert-hash sha256:e933f3e00dcc3a048205a15c78cc5e43e07907d9b396b32b8968f614bc8e7085
# if you lose the join command, you can regenerate it like this
kubeadm token create --print-join-command
# output
kubeadm join 192.168.67.128:6443 --token fv0ocw.jjha16yy77qfpegc --discovery-token-ca-cert-hash sha256:5bb18f3753cb782e7301f8efb3bc056ed93a5940f29661a73c4f7dd75a0d4703 

Let's try a command:

kubectl get svc
# output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5m46s

OVER

Problems you may run into

After a little while, strangely enough, Kubernetes went down, or some of the components never came up.
Error message: couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
Or: network plugin is not ready: cni config uninitialized
Or various flannel-related errors.
Check with the following command:

kubectl get po -n kube-flannel
# output: flannel is not up
NAME                    READY   STATUS             RESTARTS        AGE
kube-flannel-ds-q99fm   0/1     CrashLoopBackOff   5 (2m18s ago)   5m25s

Clearly the service is not running. Checking the errors with journalctl -f -u kubelet | grep error showed they were related to cni/bin/flannel. After many days of digging I finally fixed it by reinstalling flannel. Exhausting. The plugin lives at https://github.com/flannel-io/flannel

Next, we bring up new flannel pods from a YAML manifest. The manifest at the address given upstream, https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml, references three images that are all hosted on docker.io and simply would not download here.
So, via a proxy, I pushed those images to Aliyun!

------ The important part starts here ------
① Pull these two images:

docker pull registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel-cni-plugin:1.4.1-flannel1
docker pull registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel:0.25.3
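Note that if the kubelet is using containerd as its runtime (step 6), images pulled with docker pull are not visible to it. In that case you can pre-pull into containerd's k8s.io namespace instead (a sketch; the manifest below references these registry paths directly anyway, so this is optional):

sudo ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel-cni-plugin:1.4.1-flannel1
sudo ctr -n k8s.io images pull registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel:0.25.3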

② Create the YAML file and create the pods from it:

vim kube-flannel.yml

Paste in the following content:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel:0.25.3
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel-cni-plugin:1.4.1-flannel1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: registry.cn-hangzhou.aliyuncs.com/zhangjinbo/flannel:0.25.3
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock


③ Then:

kubectl apply -f kube-flannel.yml

# once it finishes, check the pods
kubectl get po -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-jfxt4   0/1     <error status>   0          36m

Check the status of your kube-flannel-ds-xxxxx pod. If it is not READY, look at the pod's logs
(replace xxxxx with the name on your machine):
kubectl logs kube-flannel-ds-xxxxx -n kube-flannel
One possibility is the error pod cidr not assigned, which means the node was never assigned a pod CIDR:

kubectl get nodes
# output
NAME                      STATUS   ROLES           AGE    VERSION
izbp15esj07w8y851x1qexz   NotReady    control-plane   179m   v1.28.10
# assign a pod CIDR
kubectl edit node izbp15esj07w8y851x1qexz
# add this under spec, then save and exit; note that if there are multiple nodes, their podCIDRs must not overlap
spec:
  podCIDR: 10.244.0.0/24
# delete the previously applied flannel resources
kubectl delete -f kube-flannel.yml
# recreate them
kubectl apply -f kube-flannel.yml 
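To confirm the node now has a pod CIDR assigned, you can query it directly (replace the node name with your own):

kubectl get node izbp15esj07w8y851x1qexz -o jsonpath='{.spec.podCIDR}{"\n"}'
# expected output: 10.244.0.0/24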

If nothing unexpected happened, the deployment has now succeeded.
(●’◡’●)(●’◡’●)(●’◡’●)(●’◡’●)(●’◡’●)
Check it:

kubectl get po -n kube-flannel
# output
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-jfxt4   1/1     Running   0          28m

kubectl get po -n kube-system
# output
NAME                                              READY   STATUS    RESTARTS   AGE
coredns-66f779496c-bg7pw                          1/1     Running   0          3h6m
coredns-66f779496c-zfxqs                          1/1     Running   0          3h6m
etcd-izbp15esj07w8y851x1qexz                      1/1     Running   0          3h6m
kube-apiserver-izbp15esj07w8y851x1qexz            1/1     Running   0          3h6m
kube-controller-manager-izbp15esj07w8y851x1qexz   1/1     Running   0          3h6m
kube-proxy-ns9jg                                  1/1     Running   0          3h6m
kube-scheduler-izbp15esj07w8y851x1qexz            1/1     Running   0          3h6m

The end................
