Deploying KubeEdge (cloudcore) + iSula on the Raspberry Pi 4B platform

Hardware: Raspberry Pi 4B

OS image: ubuntu-20.04.3-preinstalled-server-arm64+raspi.img.xz

Default credentials: ubuntu/ubuntu

Disable Ubuntu's automatic updates

Ubuntu's automatic update service runs for a while after the system first boots, consuming system resources and holding the apt lock for long periods, which interferes with the installation steps in this document. Modify /etc/apt/apt.conf.d/20auto-upgrades as follows:

--- a/20auto-upgrades	2022-01-05 07:15:30.989668829 +0000
+++ b/20auto-upgrades	2022-01-05 07:16:07.349672466 +0000
@@ -1,2 +1,4 @@
-APT::Periodic::Update-Package-Lists "1";
-APT::Periodic::Unattended-Upgrade "1";
+APT::Periodic::Update-Package-Lists "0";
+APT::Periodic::Download-Upgradeable-Packages "0";
+APT::Periodic::AutocleanInterval "0";
+APT::Periodic::Unattended-Upgrade "0";
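
The change above only disables the periodic apt tasks; if an unattended-upgrades run has already started, the service itself can be stopped as well (optional, assuming the unattended-upgrades unit is present on this image):

systemctl stop unattended-upgrades
systemctl disable unattended-upgrades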

Enable the memory cgroup

After installing the Raspberry Pi image, edit the kernel boot command line in /boot/firmware/cmdline.txt and append the following:

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1

Disable Ubuntu's cloud-init service. The Ubuntu images for Raspberry Pi, RISC-V, and other edge boards ship with this service, which adds boot time and system overhead.

touch /etc/cloud/cloud-init.disabled

The memory cgroup change (and the cloud-init change) takes effect after the Raspberry Pi is rebooted.
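
After the reboot, a quick way to confirm the cgroup change took effect (the memory line in /proc/cgroups should show 1 in the enabled column, and the added parameters should appear in the boot command line):

cat /proc/cgroups | grep memory
cat /proc/cmdline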

Switch to the root user for the rest of the deployment.

1. Environment configuration

Permanently disable the firewall (if firewalld is not installed on this Ubuntu image, this step can be skipped)

systemctl stop firewalld && systemctl disable firewalld

Disable swap

swapoff -a && sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux (Ubuntu uses AppArmor by default, so this step only applies if SELinux is actually present)

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

Configure kernel network parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
# Load the module at boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
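
A quick sanity check that the module is loaded and the values are active:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward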

2. Install the container runtime

Install iSulad

Create install.sh with the following content and run it to build and install iSulad from source.

(Based on the official script: docs/install_iSulad_on_Ubuntu_20_04_LTS.sh · openEuler/iSulad - Gitee.com)

#!/bin/bash

set -x
set -e

# export LDFLAGS
echo 'export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH' >> /etc/profile
echo 'export LD_LIBRARY_PATH=/usr/local/lib:/usr/lib:/lib/aarch64-linux-gnu/:$LD_LIBRARY_PATH' >> /etc/profile

source /etc/profile
     
echo "/usr/local/lib" >> /etc/ld.so.conf
apt update && apt install -y g++ libprotobuf-dev protobuf-compiler protobuf-compiler-grpc \
libgrpc++-dev libgrpc-dev libtool automake autoconf cmake make pkg-config libyajl-dev \
zlib1g-dev libselinux1-dev libseccomp-dev libcap-dev libsystemd-dev git libarchive-dev \
libcurl4-gnutls-dev openssl libdevmapper-dev python3 libtar0 libtar-dev libhttp-parser-dev \
libwebsockets-dev mosquitto golang

BUILD_DIR=/root/build_isula

rm -rf $BUILD_DIR
mkdir -p $BUILD_DIR

# build libevent
cd $BUILD_DIR
git clone https://gitee.com/src-openeuler/libevent.git
cd libevent
git checkout -b openEuler-20.03-LTS-tag openEuler-20.03-LTS-tag
tar -xzvf libevent-2.1.11-stable.tar.gz
cd libevent-2.1.11-stable && ./autogen.sh && ./configure
make -j $(nproc) 
make install
ldconfig

# build libevhtp
cd $BUILD_DIR
git clone https://gitee.com/src-openeuler/libevhtp.git
cd libevhtp && git checkout -b openEuler-20.03-LTS-tag openEuler-20.03-LTS-tag
tar -xzvf libevhtp-1.2.16.tar.gz
cd libevhtp-1.2.16
patch -p1 -F1 -s < ../0001-support-dynamic-threads.patch
patch -p1 -F1 -s < ../0002-close-openssl.patch
rm -rf build && mkdir build && cd build
cmake -D EVHTP_BUILD_SHARED=on -D EVHTP_DISABLE_SSL=on ../
make -j $(nproc)
make install 
ldconfig
                                                                                                                               
# build lxc
cd $BUILD_DIR
git clone https://gitee.com/src-openeuler/lxc.git
cd lxc
./apply-patches
cd lxc-4.0.3
./autogen.sh
./configure
make -j $(nproc)
make install

# build lcr
cd $BUILD_DIR
git clone https://gitee.com/openeuler/lcr.git
cd lcr
mkdir build
cd build
cmake ..
make -j $(nproc)
make install

# build and install clibcni
cd $BUILD_DIR
git clone https://gitee.com/openeuler/clibcni.git
cd clibcni
mkdir build
cd build
cmake ..
make -j $(nproc)
make install

# build and install iSulad
cd $BUILD_DIR
git clone https://gitee.com/openeuler/iSulad.git
cd iSulad
mkdir build
cd build
cmake ..
make -j $(nproc)
make install
ldconfig

Edit /etc/isulad/daemon.json; only the modified fields are listed:

    "registry-mirrors": ["docker.io"],
    "insecure-registries": ["k8s.gcr.io","quay.io","hub.oepkgs.net"],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2",        # pause image
    "network-plugin": "cni",                            # leave this empty to disable the CNI network plugin (the two paths below are then ignored); after installing the plugins, restart isulad
    "cni-bin-dir": "/opt/cni/bin",
    "cni-conf-dir": "/etc/cni/net.d",

# Install the network components

(The CNI plugins are installed later as a dependency when Kubernetes is installed via apt, into /opt/cni/bin, so they do not need to be installed manually here.)

# To install the CNI plugins manually instead:

#wget --no-check-certificate https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-arm64-v0.9.0.tgz

#mkdir -p /opt/cni/bin

#tar -zxvf cni-plugins-linux-arm64-v0.9.0.tgz -C /opt/cni/bin
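
Either way, once the plugins are in place you can confirm that the binaries match the cni-bin-dir configured above:

ls /opt/cni/bin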

Add isulad to systemd

Edit /root/build_isula/iSulad/src/contrib/init/isulad.service so that the ExecStart line reads:

ExecStart=/usr/local/bin/isulad $OPTIONS

Copy isulad.service to /lib/systemd/system/ and reload systemd:

cp /root/build_isula/iSulad/src/contrib/init/isulad.service /lib/systemd/system/
systemctl daemon-reload

# Start isulad
systemctl start isulad

# Check its status
systemctl status isulad

# Enable it at boot
systemctl enable isulad

# If you modify /etc/isulad/daemon.json later, restart the isulad service for the change to take effect
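
For example:

systemctl restart isulad
# confirm the daemon is responding
isula version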

At this point, the iSulad container environment is installed.

3. Pull the Kubernetes images

cat > pull_image.sh << 'EOF'
#!/bin/bash

tag=v1.18.6
images=(
	kube-apiserver
	kube-scheduler
	kube-controller-manager
	kube-proxy
)

for i in ${images[@]}
do

	isula pull  mirrorgcrio/$i-arm64:$tag
	isula tag mirrorgcrio/$i-arm64:$tag k8s.gcr.io/$i:$tag
done

if [ $? -eq 0 ];
then
	isula pull mirrorgcrio/etcd-arm64:3.4.3-0
	isula pull mirrorgcrio/pause-arm64:3.2
	isula pull coredns/coredns:coredns-arm64
	isula tag mirrorgcrio/pause-arm64:3.2 k8s.gcr.io/pause:3.2
	isula tag mirrorgcrio/etcd-arm64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
	isula tag coredns/coredns:coredns-arm64 k8s.gcr.io/coredns:1.6.7

fi
EOF

# Run the script
bash -x pull_image.sh 
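
If the script succeeded, the re-tagged k8s.gcr.io images should now be visible locally:

isula images | grep k8s.gcr.io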

4. Deploy Kubernetes

Install Kubernetes

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt-get update
gpg --keyserver keyserver.ubuntu.com --recv-keys FEEA9169307EA071 8B57C5C2836F4BEB # replace 8B57C5C2836F4BEB with the KEY shown in the apt-get update error; separate multiple KEYs with spaces
gpg --export --armor FEEA9169307EA071 8B57C5C2836F4BEB | sudo apt-key add -        # replace 8B57C5C2836F4BEB as above; separate multiple KEYs with spaces
apt-get update
apt-get install kubectl=1.18.6-00 kubeadm=1.18.6-00 kubelet=1.18.6-00 -y    # install Kubernetes; the CNI plugins are installed as a dependency
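
Optionally, hold the packages at 1.18.6 so that a later apt upgrade does not replace them:

apt-mark hold kubelet kubeadm kubectl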

Initialize Kubernetes

kubeadm init --kubernetes-version=1.18.6 --apiserver-advertise-address=192.168.60.240  --pod-network-cidr=10.244.0.0/16 --upload-certs --cri-socket=/var/run/isulad.sock
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

--apiserver-advertise-address: the IP address of the node that acts as the cloud side

--pod-network-cidr: the IP address range of the pod network

--upload-certs: automatically distribute certificates to nodes that join later

--cri-socket: the container runtime endpoint; if not specified, docker is assumed

--node-name: the node name

Configure the network

# Download the flannel network plugin manifest

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Prepare the cloud-side network config

cp kube-flannel.yml kube-flannel-cloud.yml

# Prepare the edge-side network config

cp kube-flannel.yml kube-flannel-edge.yml

Modify the cloud-side network config kube-flannel-cloud.yml:

--- a/kube-flannel-cloud.yml	2021-12-29 09:48:11.963863953 +0000
+++ b/kube-flannel-cloud.yml	2021-12-29 09:49:04.635376761 +0000
@@ -134,7 +134,7 @@
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: kube-flannel-ds
+  name: kube-flannel-cloud-ds
   namespace: kube-system
   labels:
     tier: node
@@ -158,6 +158,8 @@
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
+             - key: node-role.kubernetes.io/agent   # mind the indentation
+               operator: DoesNotExist
       hostNetwork: true
       priorityClassName: system-node-critical
       tolerations:

Modify the edge-side network config kube-flannel-edge.yml:

--- a/kube-flannel-edge.yml	2021-12-29 09:48:11.963863953 +0000
+++ b/kube-flannel-edge.yml	2021-12-30 02:29:37.772475294 +0000
@@ -134,7 +134,7 @@
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
-  name: kube-flannel-ds
+  name: kube-flannel-edge-ds
   namespace: kube-system
   labels:
     tier: node
@@ -158,6 +158,8 @@
               - key: kubernetes.io/os
                 operator: In
                 values:
                 - linux
+              - key: node-role.kubernetes.io/agent   # mind the indentation
+                operator: Exists
       hostNetwork: true
       priorityClassName: system-node-critical
       tolerations:
@@ -197,6 +199,7 @@
         args:
         - --ip-masq
         - --kube-subnet-mgr
+        - --kube-api-url=http://127.0.0.1:10550        # --kube-api-url is the address edgecore listens on at the edge side
         resources:
           requests:
             cpu: "100m"

Apply the network plugins

kubectl apply -f kube-flannel-cloud.yml

kubectl apply -f kube-flannel-edge.yml
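
Both DaemonSets should now exist in kube-system; the edge one normally stays at 0 desired pods until an edge node joins:

kubectl get ds -n kube-system | grep flannel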

Prevent kube-proxy from being deployed on the edge side

kubectl edit ds kube-proxy -n kube-system

The change is as follows:

--- a/kubectl-edit-o08mp.yaml	2022-01-04 01:48:29.692925515 +0000
+++ b/kubectl-edit-o08mp.yaml	2022-01-04 01:46:47.785903347 +0000
@@ -148,6 +148,13 @@
 spec:
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       k8s-app: kube-proxy
   template:
     metadata:
       creationTimestamp: null
       labels:
         k8s-app: kube-proxy
     spec:
+      affinity:        # mind the indentation
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: node-role.kubernetes.io/agent
+                operator: DoesNotExist
       containers:
       - command:
         - /usr/local/bin/kube-proxy
         - --config=/var/lib/kube-proxy/config.conf
         - --hostname-override=$(NODE_NAME)
       

5. Deploy KubeEdge

Install the Go environment

wget https://go.dev/dl/go1.17.linux-arm64.tar.gz 
tar -zxvf go1.17.linux-arm64.tar.gz -C /usr/local/ 
echo 'export GOPATH=/usr/local/kubeedge' >> /etc/profile 
echo 'export PATH=/usr/local/go/bin:$PATH' >> /etc/profile 
source /etc/profile
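
Verify the toolchain:

go version
# expected output: go version go1.17 linux/arm64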

Download KubeEdge

git clone https://gitee.com/mirrors/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge 
cd $GOPATH/src/github.com/kubeedge/kubeedge
git checkout v1.8.0

Modify the code as shown below to skip the checksum step. This step verifies the downloaded KubeEdge binary tarball, but because the checksum is fetched from GitHub over the network, it can take a long time or hang if not skipped.

file: keadm/cmd/keadm/app/cmd/util/common.go
@@ -38,7 +38,6 @@
 	"k8s.io/client-go/discovery"
 	"k8s.io/client-go/rest"
 	"k8s.io/client-go/tools/clientcmd"
-	"k8s.io/klog/v2"
 
 	types "github.com/kubeedge/kubeedge/keadm/cmd/keadm/app/cmd/common"
 	"github.com/kubeedge/kubeedge/pkg/apis/componentconfig/edgecore/v1alpha1"
@@ -261,7 +260,7 @@
 	filePath := fmt.Sprintf("%s/%s", options.TarballPath, filename)
 	if _, err = os.Stat(filePath); err == nil {
 		fmt.Printf("Expected or Default KubeEdge version %v is already downloaded and will checksum for it. \n", version)
-		if success, _ := checkSum(filename, checksumFilename, version, options.TarballPath); !success {
+		/*if success, _ := checkSum(filename, checksumFilename, version, options.TarballPath); !success {
 			fmt.Printf("%v in your path checksum failed and do you want to delete this file and try to download again? \n", filename)
 			for {
 				confirm, err := askForconfirm()
@@ -285,7 +284,7 @@
 			}
 		} else {
 			fmt.Println("Expected or Default KubeEdge version", version, "is already downloaded")
-		}
+		}*/
 	} else if !os.IsNotExist(err) {
 		return err
 	} else {

Build keadm

# Set a Go module proxy
go env -w GOPROXY=https://goproxy.cn,direct
# Build
make all WHAT=keadm
cp _output/local/bin/keadm /usr/local/bin/
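
A quick check that keadm was built and installed:

keadm version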

Copy the crds configuration directory to /etc/kubeedge and initialize KubeEdge

mkdir /etc/kubeedge
cp -r $GOPATH/src/github.com/kubeedge/kubeedge/build/crds /etc/kubeedge
cp -r $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/* /etc/kubeedge
# Generate certificates with certgen.sh; first export the cloud-side IP as an environment variable
export CLOUDCOREIPS="192.168.60.240"
cd /etc/kubeedge && ./certgen.sh stream
# Download kubeedge-v1.8.0-linux-arm64.tar.gz into /etc/kubeedge in advance so that keadm init does not have to download it
wget -P /root https://github.com/kubeedge/kubeedge/releases/download/v1.8.0/kubeedge-v1.8.0-linux-arm64.tar.gz
cp /root/kubeedge-v1.8.0-linux-arm64.tar.gz /etc/kubeedge
keadm init --advertise-address=192.168.60.240 --kubeedge-version=1.8.0 --kube-config=/root/.kube/config
# Kill the cloudcore process started by keadm init
pkill cloudcore
# Manage cloudcore with systemd instead
cp /etc/kubeedge/cloudcore.service /lib/systemd/system/
systemctl daemon-reload
# Enable at boot
systemctl enable cloudcore

Configure cloudcore

vim /etc/kubeedge/config/cloudcore.yaml

Modify as follows; only the changed parts are listed:

# around line 44
cloudStream:
  enable: true
# around line 66
dynamicController:
  enable: true    # enable dynamicController so that edgecore's list-watch feature works

Restart the cloudcore service

# Restart cloudcore

systemctl restart cloudcore

# Check its status

systemctl status cloudcore

Check the component status

kubectl get cs

You will see that the controller-manager and scheduler components report refused connections on ports 10251 and 10252, because both components disable their insecure ports by default.

Open each component's manifest, delete the --port=0 argument (a sed sketch follows the paths below), then wait a moment and run kubectl get cs again.

/etc/kubernetes/manifests/kube-controller-manager.yaml    # controller-manager manifest
/etc/kubernetes/manifests/kube-scheduler.yaml             # scheduler manifest
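
A sed sketch for removing the flag, assuming each manifest contains a literal "- --port=0" argument (kubelet watches the manifest directory and restarts the static pods automatically):

sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
# wait a moment, then check again
kubectl get cs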

At this point, the cloud-side deployment is complete.

Using kubectl

# List all nodes from the master

kubectl get node

# Run a pod

kubectl run -i --tty test --image=ubuntu:20.04

# List pods

kubectl get pods --all-namespaces

# Test

Deploy an nginx application

# KubeEdge provides an nginx deployment template that can be used directly
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml
# Output: deployment.apps/nginx-deployment created

# Check whether it was scheduled to the edge side
kubectl get pod -A -owide | grep nginx
# Output: default       nginx-deployment-c85df76f4-wdp8s   1/1     Running   0   9m55s   10.244.1.2   rpi4b   <none>   <none>

[Figure: pod list on the master]

If kube-flannel-edge-xxx stays in the Pending state, try deleting the pod; it will be recreated automatically.

kubectl delete pod kube-flannel-edge-xxx --force -n kube-system

Access nginx from the edge side

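
Since the pod above landed on the edge node (rpi4b) with pod IP 10.244.1.2, a minimal check is to curl that pod IP from the edge node itself (substitute the IP reported by your own kubectl get pod -owide output):

# run on the edge node
curl http://10.244.1.2
# an nginx welcome page means the deployment is reachable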

Delete the created deployment and pods

kubectl get pods
kubectl get rs
kubectl get deployment
kubectl delete deployment nginx-deployment

Force-delete a pod

kubectl delete pod <pod-name> --force -n <namespace>

View pod logs

kubectl logs -f <pod-name> -n <namespace>

View pod events

kubectl describe pod <pod-name>
