1. Switching the Software Sources
- First flash the system following the blog post on flashing the Jetson Nano and running the DeepStream 4.0 demo
- Switch to the Tsinghua (TUNA) mirror
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak  # back up the original sources.list first, so it can be restored if something goes wrong
sudo gedit /etc/apt/sources.list
Delete everything in the file and paste in the following:
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
Save sources.list.
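The eight mirror lines above follow a regular pattern: two entry kinds (deb, deb-src) crossed with four suites. As a small sketch, the file contents can be generated programmatically (mirror URL and suite names exactly as above):

```python
# Generate the TUNA ubuntu-ports mirror entries for Ubuntu 18.04 (bionic),
# matching the eight lines shown above.
MIRROR = "http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/"
SUITES = ["bionic", "bionic-security", "bionic-updates", "bionic-backports"]
COMPONENTS = "main multiverse restricted universe"

def sources_list():
    lines = []
    for kind in ("deb", "deb-src"):       # binary entries first, then source entries
        for suite in SUITES:
            lines.append(f"{kind} {MIRROR} {suite} {COMPONENTS}")
    return "\n".join(lines)

print(sources_list())
```

Note that the Nano uses ubuntu-ports (the ARM64 repository), not the regular x86 ubuntu mirror.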
2. Updating the System
- sudo apt-get update
- sudo apt-get upgrade
- Add the CUDA paths to the environment variables
sudo gedit ~/.bashrc
Append at the end:
export CUDA_HOME=/usr/local/cuda-10.0
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.0/bin:$PATH
- Save and exit, then run source ~/.bashrc
- nvcc -V should now print the CUDA version
3. Installing the GPU Build of TensorFlow
- sudo apt-get install python3-pip libhdf5-serial-dev hdf5-tools
- python3 -m pip install --upgrade pip
- pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu==1.13.1+nv19.5 --user
- pip3 install numpy pycuda --user
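The pinned requirement `tensorflow-gpu==1.13.1+nv19.5` uses a local version tag: `1.13.1` is the upstream TensorFlow release and `nv19.5` marks NVIDIA's May 2019 Jetson build served from the `jp/v42` (JetPack 4.2) index. A small illustrative helper (name `split_pin` is my own, not part of pip) shows how the pin decomposes:

```python
def split_pin(spec):
    """Split a pinned requirement like 'tensorflow-gpu==1.13.1+nv19.5'
    into (package name, upstream version, local build tag)."""
    name, version = spec.split("==")
    upstream, _, local = version.partition("+")  # local tag is empty if no '+'
    return name, upstream, local

print(split_pin("tensorflow-gpu==1.13.1+nv19.5"))
```

The local tag is why this wheel only resolves from NVIDIA's extra index URL and not from PyPI.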
4. Running Object Detection
- Download the TensorFlow object detection model from the TensorFlow model zoo
Here we pick ssd_mobilenet_v2_coco, i.e. ssd_mobilenet_v2_coco_2018_03_29.tar.gz
- Download the TRT object detection program
- Copy TRT_object_detection-master.zip onto the Nano, then unzip it.
- cd TRT_object_detection
- mkdir model
- Extract ssd_mobilenet_v2_coco_2018_03_29.tar.gz
- Copy frozen_inference_graph.pb from the ssd_mobilenet_v2_coco_2018_03_29 folder into the model folder.
- Patch graphsurgeon so newly created nodes get a default dtype: gedit /usr/lib/python3.6/dist-packages/graphsurgeon/node_manipulation.py and add the line marked with +:
  node = NodeDef()
  node.name = name
  node.op = op if op else name
+ node.attr["dtype"].type = 1
  for key, val in kwargs.items():
      if key == "dtype":
          node.attr["dtype"].type = val.as_datatype_enum
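The effect of the added line: every node created by graphsurgeon now defaults to dtype enum 1 (DT_FLOAT), and an explicit dtype keyword still overrides it. A minimal sketch of that logic, using stand-in classes (the real NodeDef is TensorFlow's protobuf type; the Fake* names here are purely illustrative):

```python
from collections import defaultdict

class _Attr:
    """Stand-in for a protobuf attr value holding a .type field."""
    def __init__(self):
        self.type = 0

class FakeNodeDef:
    """Minimal mock of TensorFlow's NodeDef, for illustration only."""
    def __init__(self):
        self.name = ""
        self.op = ""
        self.attr = defaultdict(_Attr)

class FakeDType:
    """Mimics a tf.DType exposing .as_datatype_enum."""
    def __init__(self, enum):
        self.as_datatype_enum = enum

def create_node(name, op=None, **kwargs):
    node = FakeNodeDef()
    node.name = name
    node.op = op if op else name
    node.attr["dtype"].type = 1              # the patched-in default: DT_FLOAT
    for key, val in kwargs.items():
        if key == "dtype":
            node.attr["dtype"].type = val.as_datatype_enum  # explicit dtype wins
    return node

n1 = create_node("Input")                    # no dtype given -> default DT_FLOAT (1)
n2 = create_node("Cast", dtype=FakeDType(3)) # explicit dtype overrides the default
```

Without the default, some generated nodes carry no dtype attribute at all, which is what makes the UFF conversion fail on this model.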
- Edit model_ssd_mobilenet_v2_coco_2018_03_29.py in TRT_object_detection/config, changing the model path:
import graphsurgeon as gs
- path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
+ path = 'model/frozen_inference_graph.pb'
TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'
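The TRTbin name in the config points at a cache file: the first run builds the TensorRT engine (slow on the Nano), serializes it to that file, and later runs load it directly. Assuming that caching behavior, the pattern can be sketched as follows (build_engine_bytes is a dummy stand-in for the real UFF-to-TensorRT build and serialization):

```python
import os

TRTbin = 'TRT_ssd_mobilenet_v2_coco_2018_03_29.bin'

def build_engine_bytes():
    # Placeholder for the expensive step: parse the UFF model, build an
    # ICudaEngine with TensorRT, and serialize it to bytes.
    return b"serialized-engine"

def load_or_build(path=TRTbin):
    if os.path.exists(path):
        with open(path, 'rb') as f:   # fast path: reuse the cached engine
            return f.read()
    buf = build_engine_bytes()        # slow path: build once, then cache
    with open(path, 'wb') as f:
        f.write(buf)
    return buf
```

Deleting the .bin file forces a rebuild, which is needed whenever the model or the TensorRT version changes.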
- python3 main.py image.jpg
As shown in the screenshot above, the program detects a person in the image.