[Jetson + Azure] A first look at Azure + Jetson Nano through two applications


I. The Bing Image Search service

  1. Go to the Azure portal home page.
  2. Create a resource -> search for "Bing Search" -> select Bing Search v7.
    If this is your first time, create a resource group; if you are in China, pick a resource group region that works for you.
    Submit -> Go to resource -> click "Manage keys".
    Copy one of the keys; it is what everything below depends on.
  3. Replace API_KEY in the code below with the key you just copied, then run it. It queries https://api.bing.microsoft.com/v7.0/images/search with the keyword 'people mask covid' and downloads the first 50 images.
    Some images will fail to download for network reasons, so remember to clean those up afterwards.
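Authentication is just one HTTP header, Ocp-Apim-Subscription-Key, carrying the key copied above. As a minimal sketch of what the script below sends (the function name build_request is illustrative, not part of the original code):

```python
URL = "https://api.bing.microsoft.com/v7.0/images/search"

def build_request(api_key, query, offset=0, count=50):
    """Assemble the headers and query parameters for Bing Image Search v7.
    The key from the "Manage keys" step goes in the subscription-key header."""
    headers = {"Ocp-Apim-Subscription-Key": api_key}
    params = {"q": query, "offset": offset, "count": count}
    return headers, params

# The full script then calls: requests.get(URL, headers=headers, params=params)
```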
from gevent import monkey
monkey.patch_all()  # patch sockets first so requests calls can yield between greenlets

from requests import exceptions
import requests
import cv2
import os
import gevent

searchengine = 'people mask covid'
output = 'img/mask'
os.makedirs(output, exist_ok=True)  # make sure the download folder exists

API_KEY = ""  # your key from the "Manage keys" step
MAX_RESULTS = 50
GROUP_SIZE = 50  # images per API request; we only want 50 in total
URL = "https://api.bing.microsoft.com/v7.0/images/search"

# Errors we silently skip while downloading individual images
EXCEPTIONS = {IOError, FileNotFoundError, exceptions.RequestException,
              exceptions.HTTPError, exceptions.ConnectionError,
              exceptions.Timeout}

term = searchengine
headers = {"Ocp-Apim-Subscription-Key": API_KEY}
params = {"q": term, "offset": 0, "count": GROUP_SIZE}
print("[INFO] searching Bing API for '{}'".format(term))
search = requests.get(URL, headers=headers, params=params)
search.raise_for_status()
total = 0
results = search.json()
estNumResults = min(results["totalEstimatedMatches"], MAX_RESULTS)
print("[INFO] {} total results for '{}'".format(estNumResults, term))
def grab_page(url, ext, total):
    """Download one image, then delete it if OpenCV cannot decode it."""
    try:
        print("[INFO] fetching: {}".format(url))
        r = requests.get(url, timeout=30)
        p = os.path.sep.join([output, "{}{}".format(str(total).zfill(8), ext)])

        with open(p, "wb") as f:
            f.write(r.content)

        # A None return means the file is truncated or not an image at all
        image = cv2.imread(p)
        if image is None:
            print("[INFO] deleting: {}".format(p))
            os.remove(p)
            return

    except Exception as e:
        if type(e) in EXCEPTIONS:
            print("[INFO] skipping: {}".format(url))
            return

for offset in range(0, estNumResults, GROUP_SIZE):
    print("[INFO] making request for group {}-{} of {}...".format(
        offset, offset + GROUP_SIZE, estNumResults))
    params["offset"] = offset
    search = requests.get(URL, headers=headers, params=params)
    search.raise_for_status()
    results = search.json()
    print("[INFO] saving images for group {}-{} of {}...".format(
        offset, offset + GROUP_SIZE, estNumResults))
    jobs = []
    for v in results["value"]:
        if total >= MAX_RESULTS:
            break
        total += 1
        url = v["contentUrl"]
        ext = url[url.rfind("."):]  # crude: keeps everything after the last dot

        jobs.append(gevent.spawn(grab_page, url, ext, total))

    gevent.joinall(jobs, timeout=10)
    print(total)

II. Zero-code training with Azure Custom Vision, then running the mask-detection network on the Jetson Nano

In short: train the model on Azure and export it as a .onnx file, then load the ONNX model on the Nano for inference.

  1. Create a resource -> search for Custom Vision -> Create.
  2. Review + create. Then click through to the Custom Vision portal, or open https://www.customvision.ai/ in a browser.
  3. New project -> configure it as needed. Be careful not to pick the S1 tier by mistake.
  4. Create -> add training images (the masked-face pictures collected in Part I) -> click an image open and drag boxes with the mouse to label it -> when labelling is done, click the green Train button at the top right -> pick a training type as needed (Quick Training in this example) -> inspect the trained model's metrics.
  5. Click Quick Test next to Train, upload an image, and check the result.
  6. Click Export and export the model as ONNX.
  7. Put the exported .onnx file and labels.txt into the working directory on the Nano.
  8. Run the following code:

import os
import tempfile

import onnx
import onnxruntime
import numpy as np
import cv2
import matplotlib.pyplot as plt
from PIL import Image
from object_detection import ObjectDetection  # helper shipped with the Custom Vision ONNX export

ModelFile = 'model.onnx'
ImageFile = 'demo.jpg'

with open('labels.txt', 'r') as f:
    labels = [l.strip() for l in f.readlines()]
print(labels)
class ONNXRuntimeObjectDetection(ObjectDetection):
    """Object detection class for ONNX Runtime"""
    def __init__(self, model_filename, labels):
        super(ONNXRuntimeObjectDetection, self).__init__(labels)
        model = onnx.load(model_filename)
        with tempfile.TemporaryDirectory() as dirpath:
            temp = os.path.join(dirpath, os.path.basename(model_filename))
            # Make the spatial dimensions symbolic so any input size is accepted
            model.graph.input[0].type.tensor_type.shape.dim[-1].dim_param = 'dim1'
            model.graph.input[0].type.tensor_type.shape.dim[-2].dim_param = 'dim2'
            onnx.save(model, temp)
            # Fall back to CUDA/CPU if TensorRT is unavailable on this device
            self.session = onnxruntime.InferenceSession(
                temp,
                providers=['TensorrtExecutionProvider',
                           'CUDAExecutionProvider',
                           'CPUExecutionProvider'])
        self.input_name = self.session.get_inputs()[0].name
        self.is_fp16 = self.session.get_inputs()[0].type == 'tensor(float16)'

    def predict(self, preprocessed_image):
        inputs = np.array(preprocessed_image, dtype=np.float32)[np.newaxis, :, :, (2, 1, 0)]  # RGB -> BGR
        inputs = np.ascontiguousarray(np.rollaxis(inputs, 3, 1))  # NHWC -> NCHW

        if self.is_fp16:
            inputs = inputs.astype(np.float16)

        outputs = self.session.run(None, {self.input_name: inputs})
        return np.squeeze(outputs).transpose((1, 2, 0)).astype(np.float32)
imcv = cv2.imread(ImageFile)
height, width, channels = imcv.shape

od_model = ONNXRuntimeObjectDetection(ModelFile, labels)
image = Image.open(ImageFile)
predictions = od_model.predict_image(image)
print(predictions)

# Bounding boxes come back normalized to [0, 1]; scale them to pixels.
for p in predictions:
    if p['probability'] > 0.5:
        box = p['boundingBox']
        left = int(box['left'] * width)
        top = int(box['top'] * height)
        right = int((box['left'] + box['width']) * width)
        bottom = int((box['top'] + box['height']) * height)
        cv2.rectangle(imcv, (left, top), (right, bottom), (100, 0, 0), 3)

# OpenCV stores images as BGR; convert so matplotlib shows correct colors.
plt.imshow(cv2.cvtColor(imcv, cv2.COLOR_BGR2RGB))
plt.show()
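Custom Vision reports each box as left/top/width/height normalized to [0, 1], which is why the drawing loop multiplies by the image size. The conversion in isolation (to_pixels is an illustrative helper, not part of the exported API):

```python
def to_pixels(box, img_w, img_h):
    """Convert a Custom Vision normalized bounding box
    ({'left', 'top', 'width', 'height'} in [0, 1]) to pixel corners."""
    x1 = int(box["left"] * img_w)
    y1 = int(box["top"] * img_h)
    x2 = int((box["left"] + box["width"]) * img_w)
    y2 = int((box["top"] + box["height"]) * img_h)
    return x1, y1, x2, y2

# e.g. on a 400x200 image:
# to_pixels({"left": 0.25, "top": 0.5, "width": 0.5, "height": 0.25}, 400, 200)
# -> (100, 100, 300, 150)
```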

View the result.
