Installing TensorFlow on the Jetson Nano and Running Object Detection

I. Change the Software Sources

  1. First flash the system by following the earlier blog post on flashing the Jetson Nano and running the DeepStream 4.0 demo.
  2. Switch to the Tsinghua (TUNA) mirror:
    sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak #back up the original sources.list first so it can be restored if something goes wrong
    sudo gedit /etc/apt/sources.list
    Delete everything in the file and paste in the following:

deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe

Save sources.list.

II. Update the System

  1. sudo apt-get update
  2. sudo apt-get upgrade
  3. Add the CUDA paths to the environment variables:
    sudo gedit ~/.bashrc
    Append the following at the end:
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.0/bin:$PATH
  4. Save and exit, then run source ~/.bashrc
  5. nvcc -V should now print the CUDA version (a scripted check is sketched right after this list)
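
If you would rather check from Python, the sketch below only verifies that CUDA_HOME is set and that nvcc is reachable on the PATH. It is a minimal convenience check, assuming the exports above have been sourced; the file name quick_cuda_check.py is just an illustrative name.

# quick_cuda_check.py -- illustrative name; run after `source ~/.bashrc`
import os
import shutil
import subprocess

# CUDA_HOME should point at /usr/local/cuda-10.0 after the exports above.
print("CUDA_HOME    =", os.environ.get("CUDA_HOME"))

# nvcc should resolve to /usr/local/cuda-10.0/bin/nvcc once PATH is updated.
nvcc = shutil.which("nvcc")
print("nvcc on PATH =", nvcc)

if nvcc:
    # Same output as running `nvcc -V` by hand: the CUDA toolkit version string.
    print(subprocess.check_output([nvcc, "-V"], universal_newlines=True).strip())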

III. Installing the GPU Build of TensorFlow

  1. sudo apt-get install python3-pip libhdf5-serial-dev hdf5-tools
  2. python3 -m pip install --upgrade pip
  3. pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.5 --user
  4. pip3 install numpy pycuda --user
    A quick check that the install can see the GPU is sketched right after this list.
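
The snippet below is a minimal sanity check, assuming the NVIDIA TensorFlow 1.13 wheel installed cleanly: it prints the version and whether TensorFlow can see the Nano's GPU, using the TensorFlow 1.x test helpers (the file name tf_gpu_check.py is just an illustrative name).

# tf_gpu_check.py -- illustrative name; confirms TensorFlow can see the Nano's GPU
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expect 1.13.1
print("GPU available:", tf.test.is_gpu_available())   # expect True (this spins up a session, so it takes a moment)
print("GPU device:", tf.test.gpu_device_name())       # expect /device:GPU:0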

IV. Running Object Detection

  1. Download a TensorFlow object detection model from the TensorFlow model zoo.
    Here we use ssd_mobilenet_v2_coco:
    ssd_mobilenet_v2_coco_2018_03_29.tar.gz
  2. Download the TRT object detection program (TRT_object_detection).
  3. Copy TRT_object_detection-master.zip onto the Nano and unzip it.
  4. cd TRT_object_detection
  5. mkdir model
  6. Extract ssd_mobilenet_v2_coco_2018_03_29.tar.gz
  7. Copy frozen_inference_graph.pb from the ssd_mobilenet_v2_coco_2018_03_29 folder into the model folder.
  8. gedit /usr/lib/python3.6/dist-packages/graphsurgeon/node_manipulation.py
     Add the line marked with + below; it gives each newly created node a default dtype of float (1 is DT_FLOAT in TensorFlow's DataType enum) whenever no explicit dtype is passed in:
     node = NodeDef()
     node.name = name
     node.op = op if op else name
+    node.attr["dtype"].type = 1
     for key, val in kwargs.items():
         if key == "dtype":
             node.attr["dtype"].type = val.as_datatype_enum
  9. Edit model_ssd_mobilenet_v2_coco_2018_03_29.py in TRT_object_detection/config so that path points to where the .pb file was copied in step 7 (directly under model/, not in a subfolder):
import graphsurgeon as gs

-  path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
+  path = 'model/frozen_inference_graph.pb'
   TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'
  10. python3 main.py image.jpg
    (screenshot of the detection result)
    The image above shows that a person was detected.
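
As an optional cross-check that the frozen graph itself is good, independent of the TensorRT pipeline above, the sketch below runs model/frozen_inference_graph.pb directly with TensorFlow 1.13 and OpenCV. The tensor names are the standard ones exported by the TensorFlow Object Detection API; treat this as a rough sanity check under those assumptions, not as part of the TRT_object_detection program.

# tf_frozen_graph_check.py -- illustrative name; not part of TRT_object_detection
# Runs the frozen SSD graph in plain TensorFlow to confirm the .pb file is usable.
import cv2
import tensorflow as tf

# Load the frozen graph copied into model/ in step 7.
graph_def = tf.GraphDef()
with tf.gfile.GFile('model/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# The Object Detection API expects a uint8 NHWC batch in RGB order.
img = cv2.imread('image.jpg')
inp = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)[None, ...]

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': inp})

# COCO class id 1 is "person", matching the detection in the screenshot above.
print('top score:', scores[0][0], 'class id:', int(classes[0][0]))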