Jetson Orin Nano System Flashing Tutorial


1 System Flashing

1.1 Method 1: Flashing with an SD Card

1.1.1 Related Downloads

Flashing from Windows is recommended, as it makes formatting the SD card and writing the image straightforward.

(1) SD card formatter: https://www.sdcard.org/downloads/formatter/sd-memory-card-formatter-for-windows-download/

(2) Image flashing tool (balenaEtcher): https://etcher.balena.io/#download-etcher

(3) JetPack 5.1.2 image: https://developer.nvidia.com/embedded/jetpack-sdk-512

1.1.2 Flashing Steps

First format the SD card, then use the flashing tool to write the JetPack 5.1.2 image to it.

1.2 Method 2: Flashing with SDK Manager

1.2.1 Download SDK Manager

Official link: https://developer.nvidia.com/sdk-manager

  • To enter recovery mode, short FC_REC to GND (pins 2 and 3), connect the USB Type-C data cable, then power on the board

Reference documentation: https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/HR/JetsonDeveloperKitSetup.html

1.2.2 Flashing Steps

1. Prepare a host PC, the Jetson Orin Nano board to be flashed, an SD card, and a data cable
2. Install SDK Manager on the host's Ubuntu system
3. Connect the host and the Orin with the data cable and short the recovery pins (if you have no jumper cap, a key or other conductor also works)
4. Follow the SDK Manager workflow to flash the system
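Before starting the SDK Manager flow, it is worth confirming from the Ubuntu host that the board actually entered recovery mode. A minimal check, assuming the host has `usbutils` installed (0955 is NVIDIA's USB vendor ID, which appears on the bus only when the board is in recovery mode):

```shell
# Hedged sketch: verify the Jetson is in recovery mode before launching SDK Manager.
if ! command -v lsusb >/dev/null; then
    echo "lsusb not installed (sudo apt install usbutils)"
elif lsusb | grep -q "0955"; then
    echo "NVIDIA device detected - board is in recovery mode"
else
    echo "no NVIDIA device found - re-check the jumper and cable"
fi
```

If no NVIDIA device shows up, power-cycle the board with the pins still shorted and check the cable is a data cable, not charge-only.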

2 Slow Boot on the Orin Nano

[!!! This article fixes slow boot by disabling startup services. Do not do this unless necessary!! Otherwise the system can break.

Normally, a system flashed with Method 1 (SD card) boots in under a minute and does not need this fix.]

2.1 Inspecting Startup Services

systemd-analyze blame

This lists the startup services ordered by how long each took:

9.871s alsa-restore.service                
7.216s networkd-dispatcher.service         
6.057s nv.service                          
5.725s ModemManager.service                
3.450s dev-mmcblk1p1.device                
2.769s docker.service                      
2.096s apt-daily-upgrade.service           
2.062s accounts-daemon.service             
1.959s nv-l4t-usb-device-mode.service      
1.701s udisks2.service                     
1.613s avahi-daemon.service                
1.578s NetworkManager.service              
1.500s snapd.service                       
1.455s nv-l4t-bootloader-config.service    
1.381s polkit.service                      
1.279s apt-daily.service                   
1.248s containerd.service                  
1.183s apport.service                      
1.181s switcheroo-control.service          
1.110s wpa_supplicant.service              
1.099s ua-timer.service                    
1.093s systemd-logind.service              
 998ms resolvconf-pull-resolved.service    
 913ms nvpower.service                     
 845ms rsyslog.service                     
 710ms user@124.service                    
 680ms systemd-resolved.service            
 577ms binfmt-support.service              
 539ms systemd-modules-load.service        
 528ms nvpmodel.service                    
 515ms dev-hugepages.mount                 
 510ms dev-mqueue.mount                    
 502ms run-rpc_pipefs.mount                
 501ms kerneloops.service                  
 495ms systemd-udev-trigger.service        
 494ms sys-kernel-debug.mount              
 490ms sys-kernel-tracing.mount            
 467ms keyboard-setup.service              
 457ms kmod-static-nodes.service           
 442ms modprobe@chromeos_pstore.service    
 433ms nvphs.service                       
 426ms modprobe@efi_pstore.service         
 422ms modprobe@pstore_blk.service         
 419ms modprobe@ramoops.service            
 418ms modprobe@pstore_zone.service        
 415ms nvfb-udev.service                   
 413ms e2scrub_reap.service                
 392ms systemd-journald.service            
 336ms ssh.service                         
 325ms nvfb-early.service                  
 324ms nv_nvsciipc_init.service            
 321ms systemd-remount-fs.service          
 306ms bluetooth.service                   
 287ms gdm.service                         
 272ms pppd-dns.service                    
 253ms systemd-udevd.service               
 243ms systemd-timesyncd.service           
 209ms systemd-update-utmp.service         
 199ms systemd-random-seed.service         
 197ms systemd-tmpfiles-clean.service      
 193ms snapd.seeded.service                
 168ms user@1000.service                   
 164ms systemd-tmpfiles-setup-dev.service  
 162ms systemd-sysusers.service            
 159ms openvpn.service                     
 156ms console-setup.service               
 154ms motd-news.service                   
 153ms proc-sys-fs-binfmt_misc.mount       
 135ms plymouth-read-write.service         
 131ms systemd-tmpfiles-setup.service      
 126ms colord.service                      
 117ms upower.service                      
 115ms nfs-config.service                  
 103ms systemd-sysctl.service              
  89ms systemd-journal-flush.service       
  85ms docker.socket                       
  85ms rpcbind.service                     
  75ms systemd-user-sessions.service       
  74ms ubuntu-fan.service                  
  68ms setvtrgb.service                    
  54ms nvfb.service                        
  51ms snapd.socket                        
  47ms sys-kernel-config.mount             
  34ms rtkit-daemon.service                
  30ms user-runtime-dir@124.service        
  19ms user-runtime-dir@1000.service       
  13ms plymouth-quit-wait.service          
  12ms systemd-update-utmp-runlevel.service
  10ms systemd-rfkill.service              
   6ms sys-fs-fuse-connections.mount

2.2 Disabling Services

# disable services
sudo systemctl disable gdm.service   # note: this is the desktop service - make sure you do not need it
sudo systemctl disable NetworkManager-wait-online.service
sudo systemctl disable alsa-restore.service
sudo systemctl disable docker.service
sudo systemctl disable cron-daily.service
sudo systemctl disable bluetooth.service
# then reboot
reboot
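Since disabling the wrong service can break the system, it may help to batch the commands with a dry-run first. A small sketch (the service list here is just the subset from above; `DRY_RUN` is a name introduced for this example):

```shell
# Hedged helper: disable several services in one pass. With DRY_RUN=1 (the
# default here) it only prints what it would do, so you can review first.
DRY_RUN=${DRY_RUN:-1}
for svc in alsa-restore.service docker.service bluetooth.service; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: sudo systemctl disable $svc"
    else
        sudo systemctl disable "$svc"
    fi
done
```

Run once with the default to review the list, then re-run with `DRY_RUN=0` to apply.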

2.3 Re-enabling Services

# enable a service again
sudo systemctl enable cron-daily.service
# then reboot
reboot

2.4 Services Disabled on the Orin Nano

# sudo systemctl disable NetworkManager.service
sudo systemctl disable alsa-restore.service
sudo systemctl disable docker.service
sudo systemctl disable apt-daily-upgrade.service
sudo systemctl disable apt-daily.service
sudo systemctl disable bluetooth.service

3 Environment Setup

JetPack 5.1.2

Ubuntu 20.04

3.1 Configuring CUDA

# open ~/.bashrc:
vim ~/.bashrc   # or: gedit ~/.bashrc

# append at the end and save:
export CUDA_HOME=/usr/local/cuda-11.4
export PATH=$PATH:$CUDA_HOME/bin
export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# apply the changes:
source ~/.bashrc

# verify:
cat /proc/driver/nvidia/version
nvcc -V

# alternatively, ~/.bashrc can use the unversioned symlink:
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
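Before relying on either set of exports, it may be worth confirming that the CUDA install they point at actually exists on disk. A minimal check, assuming the default JetPack install location:

```shell
# Hedged check: confirm CUDA_HOME points at a real CUDA install.
CUDA_HOME=${CUDA_HOME:-/usr/local/cuda}
if [ -x "$CUDA_HOME/bin/nvcc" ]; then
    "$CUDA_HOME/bin/nvcc" --version
else
    echo "nvcc not found under $CUDA_HOME - check the CUDA_HOME export"
fi
```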

Copy the cuDNN headers and libraries into the CUDA directory. The cuDNN headers are in /usr/include and the libraries in /usr/lib/aarch64-linux-gnu:

cd /usr/include && sudo cp cudnn.h /usr/local/cuda/include
cd /usr/lib/aarch64-linux-gnu && sudo cp libcudnn* /usr/local/cuda/lib64

Change the permissions of the copied headers and libraries so that all users can read, write, and execute them:

sudo chmod 777 /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Recreate the symlinks (adjust the version number, 8.4.0 here, to match the libcudnn files actually installed):

cd /usr/local/cuda/lib64
sudo ln -sf libcudnn.so.8.4.0 libcudnn.so.8
sudo ln -sf libcudnn_ops_train.so.8.4.0 libcudnn_ops_train.so.8
sudo ln -sf libcudnn_ops_infer.so.8.4.0 libcudnn_ops_infer.so.8
sudo ln -sf libcudnn_adv_infer.so.8.4.0 libcudnn_adv_infer.so.8
sudo ln -sf libcudnn_cnn_infer.so.8.4.0 libcudnn_cnn_infer.so.8
sudo ln -sf libcudnn_cnn_train.so.8.4.0 libcudnn_cnn_train.so.8
sudo ln -sf libcudnn_adv_train.so.8.4.0 libcudnn_adv_train.so.8
sudo ldconfig
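After `sudo ldconfig`, one can check that the dynamic linker actually registered the relinked libraries, for example:

```shell
# Hedged check: confirm the linker cache now contains the cuDNN libraries.
ldconfig -p | grep libcudnn || echo "no libcudnn entries - re-run sudo ldconfig"
```

If the grep prints nothing for the relinked names, the symlinks likely point at missing files.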

Test cuDNN:

sudo cp -r /usr/src/cudnn_samples_v8/ ~/
cd ~/cudnn_samples_v8/mnistCUDNN
sudo chmod 777 ~/cudnn_samples_v8
sudo make clean && sudo make
./mnistCUDNN

3.2 Installing Dependencies

# update the system and install dependencies
sudo apt-get update

# install the JetPack components, which include CUDA, cuDNN and TensorRT
sudo apt install nvidia-jetpack

sudo apt-get install libhdf5-serial-dev hdf5-tools zlib1g-dev zip libjpeg8-dev libopenblas-dev python3-pip
sudo apt-get install libfreeimage3 libfreeimage-dev

# run the following commands to check the CUDA version preinstalled on the Jetson
sudo apt-get install python3-pip
sudo pip3 install jetson-stats
sudo jtop

3.3 Installing conda

Tsinghua mirror:

https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/

Download the latest aarch64 build, for example:

Anaconda3-2023.09-0-Linux-aarch64.sh (838.8 MiB, 2023-09-29)

After installing, create an environment:

conda create -n detect python=3.8

3.4 Installing torch and torchvision

Target versions:

torch: '2.1.0a0+41361538.nv23.06'
torchvision: '0.15.2a0+fa99a53'

How to install torchvision:

$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev
$ git clone --branch v0.15.2 https://ghproxy.com/https://github.com/pytorch/vision torchvision   # see below for the torchvision version to download
$ cd torchvision
$ export BUILD_VERSION=0.15.2  # where 0.x.0 is the torchvision version
$ python3 setup.py install --user    # the build takes a long time
$ cd ../  # attempting to load torchvision from the build dir will result in an import error
$ pip install 'pillow<7'  # always needed for Python 2.7; not needed for torchvision v0.5.0+ with Python 3.6
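After the build, a quick way to verify the install is to import both packages from outside the torchvision source tree and print what was picked up:

```shell
# Hedged check: import torch/torchvision and report versions and CUDA visibility.
python3 - <<'EOF'
try:
    import torch, torchvision
    print("torch", torch.__version__)
    print("torchvision", torchvision.__version__)
    print("cuda available:", torch.cuda.is_available())
except ImportError as e:
    print("import failed:", e)
EOF
```

The printed versions should match the targets listed above, and `cuda available` should be True on a correctly configured board.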

3.5 查看系统是否存在tensorrt

 dpkg -l | grep TensorRT
sudo vim ~/.bashrc
export PATH=/usr/src/tensorrt/bin:$PATH
source ~/.bashrc
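With the PATH updated, both the Python binding and the command-line tool can be checked in one pass (a small sketch; the fallback messages are this example's own wording):

```shell
# Hedged check: verify the TensorRT Python binding and the trtexec tool.
python3 -c "import tensorrt; print('tensorrt', tensorrt.__version__)" 2>/dev/null \
    || echo "tensorrt module not found - try: sudo apt install nvidia-jetpack"
if command -v trtexec >/dev/null; then
    echo "trtexec found at $(command -v trtexec)"
else
    echo "trtexec not on PATH - check the /usr/src/tensorrt/bin export above"
fi
```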

Notes:

1. The SD-card image ships with TensorRT preinstalled, so no separate installation is needed. If `import tensorrt` cannot find the module, try:

# install the JetPack components, which include CUDA, cuDNN and TensorRT
sudo apt install nvidia-jetpack

2. A TensorRT 8.2.3.0 wheel (.whl) package is also available in the attached resources.
