Sedna Deployment and Joint Inference Demo

1. Introduction

KubeEdge SIG AI works to address the challenges of bringing AI to the edge and to improve the performance and efficiency of edge AI. Building on earlier explorations of applying the edge-cloud synergy mechanism to AI scenarios, SIG AI members jointly launched the Sedna subproject to consolidate these best practices.

Built on KubeEdge's edge-cloud synergy capabilities, Sedna provides cross-edge-cloud collaborative training and collaborative inference for AI workloads. It supports mainstream AI frameworks, including TensorFlow, PyTorch, PaddlePaddle, and MindSpore, and lets existing AI applications sink to the edge seamlessly, quickly enabling cross-edge-cloud incremental learning, federated learning, and joint inference, ultimately reducing cost, improving model performance, and protecting data privacy.

https://github.com/kubeedge/sedna

2. Check the node name on the master node

kubectl get node -o wide

Set SEDNA_GM_NODE to the master node name, then run the installer:

SEDNA_GM_NODE=master
curl https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh | SEDNA_GM_NODE=$SEDNA_GM_NODE SEDNA_ACTION=create bash -

If the network is unreliable, download the Sedna repository to the local machine first and point SEDNA_ROOT at it, so the installer can read local files where available:

export SEDNA_ROOT=/opt/sedna
SEDNA_GM_NODE=master
curl https://raw.githubusercontent.com/kubeedge/sedna/main/scripts/installation/install.sh | SEDNA_GM_NODE=$SEDNA_GM_NODE SEDNA_ACTION=create bash -

# Check the GM status:
kubectl get deploy -n sedna gm
# Check the LC status:
kubectl get ds lc -n sedna
# Check the pod status:
kubectl get pod -n sedna
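Instead of polling the three `get` commands by hand, `kubectl rollout status` can block until the GM deployment and LC daemonset are ready. A convenience sketch, guarded so it is a no-op on a machine without kubectl:

```shell
# Block until the Sedna GM deployment and LC daemonset finish rolling out
if command -v kubectl >/dev/null; then
  kubectl -n sedna rollout status deploy/gm --timeout=120s
  kubectl -n sedna rollout status ds/lc --timeout=120s
else
  echo "kubectl not available on this machine"
fi
```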

3. Install kubectl on the edge node

1. Download the kubectl binary

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl

2. Make the kubectl binary executable

chmod +x ./kubectl

3. Move the binary into a directory on your PATH

sudo mv ./kubectl /usr/local/bin/kubectl

4. Check the installed version

kubectl version --client

If kubectl reports "The connection to the server localhost:8080 was refused - did you specify the right host or port?", the edge node has no kubeconfig yet.

Copy /etc/kubernetes/admin.conf from the master node to the edge node, then set the environment variable:

vim /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf   # adjust to wherever you copied the file
source /etc/profile
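Once the variable is exported, a quick sanity check from the edge node confirms it can reach the API server (guarded so it degrades gracefully where kubectl or the copied kubeconfig is missing):

```shell
# Confirm the edge node can now reach the API server
if command -v kubectl >/dev/null && [ -f "${KUBECONFIG:-}" ]; then
  kubectl get nodes
else
  echo "kubectl or kubeconfig missing; re-check the steps above"
fi
```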

4. Prepare the images

In the build script, comment out the Dockerfiles of the other examples; for this case, keep the big-model image on the master and the little-model image on the edge.

bash /home/edge/sedna/examples/build_image.sh

5. Create the services

Create Big Model Resource Object for Cloud

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF

Create Little Model Resource Object for Edge

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
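At this point both Model objects should be queryable as custom resources. A quick check (guarded for machines without cluster access):

```shell
# List the Model custom resources created above
if command -v kubectl >/dev/null; then
  kubectl get models.sedna.io -n default
else
  echo "run this on a node with cluster access"
fi
```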

On the edge node, create the output directory:

mkdir -p /joint_inference/output

Create JointInferenceService

[Note] Change the node names and the image versions to match your environment.

CLOUD_NODE="cloud-node-name"
EDGE_NODE="edge-node-name"

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "helmet-detection-inference-little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: $EDGE_NODE
        containers:
        - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.4.0
          imagePullPolicy: IfNotPresent
          name:  little-model
          env:  # user defined environments
          - name: input_shape
            value: "416,736"
          - name: "video_url"
            value: "rtsp://localhost/video"
          - name: "all_examples_inference_output"
            value: "/data/output"
          - name: "hard_example_cloud_inference_output"
            value: "/data/hard_example_cloud_inference_output"
          - name: "hard_example_edge_inference_output"
            value: "/data/hard_example_edge_inference_output"
          resources:  # user defined resources
            requests:
              memory: 64M
              cpu: 100m
            limits:
              memory: 2Gi
          volumeMounts:
            - name: outputdir
              mountPath: /data/
        volumes:   # user defined volumes
          - name: outputdir
            hostPath:
              # user must create the directory in host
              path: /joint_inference/output
              type: Directory

  cloudWorker:
    model:
      name: "helmet-detection-inference-big-model"
    template:
      spec:
        nodeName: $CLOUD_NODE
        containers:
          - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.4.0
            name:  big-model
            imagePullPolicy: IfNotPresent
            env:  # user defined environments
              - name: "input_shape"
                value: "544,544"
            resources:  # user defined resources
              requests:
                memory: 2Gi
EOF

Check Joint Inference Status

kubectl get jointinferenceservices.sedna.io
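For more detail than the one-line status, `kubectl describe` shows the service's conditions and worker states (same guard as above; the service name comes from the spec created earlier):

```shell
# Inspect conditions of the joint inference service
if command -v kubectl >/dev/null; then
  kubectl describe jointinferenceservices.sedna.io helmet-detection-inference-example
else
  echo "run this on a node with cluster access"
fi
```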

Mock Video Stream for Inference in Edge Side

wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz
tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh
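Before streaming, it is worth confirming that EasyDarwin is actually listening; this sketch assumes the default RTSP port 554:

```shell
# Check that an RTSP listener is up on port 554 (EasyDarwin default, assumed)
if command -v ss >/dev/null; then
  ss -ltn | grep -w 554 || echo "port 554 not listening yet"
else
  echo "ss not available; try netstat -ltn instead"
fi
```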

mkdir -p /data/video
cd /data/video
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz
tar -zxvf video.tar.gz

ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video

The results are written to the /joint_inference/output directory.
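A quick way to verify that results are arriving is to count the files under the output directory (the path matches the hostPath volume in the service spec; the snippet is harmless if the directory does not exist yet):

```shell
# Count the result files produced so far
OUTPUT_DIR=/joint_inference/output
if [ -d "$OUTPUT_DIR" ]; then
  find "$OUTPUT_DIR" -type f | wc -l
else
  echo "no output yet at $OUTPUT_DIR"
fi
```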
