Learning High-Speed Flight in the Wild (Ubuntu 20.04) Code Reproduction

Prerequisites

Install ROS Noetic

# 1: Add the ROS package source (USTC mirror)
sudo sh -c '. /etc/lsb-release && echo "deb http://mirrors.ustc.edu.cn/ros/ubuntu/ $DISTRIB_CODENAME main" > /etc/apt/sources.list.d/ros-latest.list'
# 2: Add the key
sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# 3: Install ROS
sudo apt install ros-noetic-desktop-full
# 4: Initialize rosdep (6-rosdep is a community helper that points rosdep at domestic mirrors so init/update succeed)
sudo apt-get install python3-pip
sudo pip3 install 6-rosdep
sudo 6-rosdep
sudo rosdep init
rosdep update
# 5: Set up the environment variables
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
# 6: Install rosinstall
sudo apt install python3-rosinstall python3-rosinstall-generator python3-wstool
# 7: Verify
roscore

Install NVIDIA drivers, CUDA 11.2, and cuDNN

Install the NVIDIA driver

Open System Settings -> Software & Updates -> Additional Drivers, select the NVIDIA driver, and click Apply Changes. This dialog automatically lists the NVIDIA drivers recommended for the GPU in your machine.
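If you prefer the terminal, Ubuntu's stock driver utility offers an equivalent route (the exact driver version it recommends depends on your GPU):

ubuntu-drivers devices            # list detected GPUs and the recommended driver
sudo ubuntu-drivers autoinstall   # install the recommended driver
sudo reboot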

Install CUDA 11.2

Go to the CUDA download archive on the NVIDIA website at https://developer.nvidia.com/cuda-toolkit-archive and click CUDA Toolkit 11.2.0 to download that version.
Select Linux → x86_64 → Ubuntu → 20.04. Three installation methods are then offered; use the runfile (local) method.

Install the libraries that CUDA 11.2 depends on:

sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev

Ubuntu commands to install CUDA 11.2.0:

wget https://developer.download.nvidia.com/compute/cuda/11.2.0/local_installers/cuda_11.2.0_460.27.04_linux.run
sudo sh cuda_11.2.0_460.27.04_linux.run

After running the commands above, the installer UI appears: select Continue and then type accept.
On the next screen, press Enter to deselect the Driver entry (it was already installed), then select Install.
Configure the CUDA environment variables: open ~/.bashrc and append the following lines at the end:

export PATH=$PATH:/usr/local/cuda/bin  
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64  
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64
# Then reload the configuration in the terminal
source ~/.bashrc
# Check the CUDA version
nvcc -V

Test CUDA

Go to the NVIDIA CUDA samples, located at /home/llj/NVIDIA_CUDA-11.2_Samples (the runfile places them in your home directory if the Samples option was left selected), open a terminal in that folder and run make. Then enter 1_Utilities/deviceQuery and run ./deviceQuery; if the output ends with Result = PASS, the installation succeeded.
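For reference, the test amounts to the commands below (a sketch assuming the samples were installed to the default location in your home directory; adjust the path to yours):

cd ~/NVIDIA_CUDA-11.2_Samples
make -j$(nproc)
cd 1_Utilities/deviceQuery
./deviceQuery    # the last line of output should read "Result = PASS"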

Install cuDNN

Download the cuDNN build matching the installed CUDA from the NVIDIA cuDNN download page at https://developer.nvidia.com/rdp/cudnn-download. For Ubuntu 20.04 and CUDA 11.2.0, choose cuDNN v8.1.1.

Extract the downloaded cudnn-11.2-linux-x64-v8.1.1.33.tgz; this produces a folder named cuda:

tar -zxvf cudnn-11.2-linux-x64-v8.1.1.33.tgz

Copy the files in the cuda folder into /usr/local/cuda-11.2/lib64/ and /usr/local/cuda-11.2/include/:

sudo cp cuda/lib64/* /usr/local/cuda-11.2/lib64/
sudo cp cuda/include/* /usr/local/cuda-11.2/include/
# After copying, check the cuDNN version information with:
cat /usr/local/cuda-11.2/include/cudnn_version.h | grep CUDNN_MAJOR -A 2

Verify cuDNN

From the same NVIDIA cuDNN download page, also download the three .deb packages (runtime, dev, and samples).
Install the three downloaded .deb packages from a terminal:

sudo dpkg -i libcudnn8_8.1.1.33-1+cuda11.2_amd64.deb
sudo dpkg -i libcudnn8-dev_8.1.1.33-1+cuda11.2_amd64.deb
sudo dpkg -i libcudnn8-samples_8.1.1.33-1+cuda11.2_amd64.deb

These three packages install the cuDNN test programs under /usr/src/cudnn_samples_v8. Enter the mnistCUDNN directory there and run make clean && make; if no errors are reported, cuDNN is installed correctly.
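Since /usr/src is not writable by a normal user, it is common to copy the samples somewhere writable first; a minimal sketch:

cp -r /usr/src/cudnn_samples_v8 ~/
cd ~/cudnn_samples_v8/mnistCUDNN
make clean && make
./mnistCUDNN     # should finish with "Test passed!"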

Install Open3D v0.9.0

git clone --recursive https://github.com/intel-isl/Open3D
cd Open3D
git checkout v0.9.0
git submodule update --init --recursive
./util/scripts/install-deps-ubuntu.sh
mkdir build
cd build
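# Note: the unusual install prefix on the next line is intentional; the CMakeLists
# edits later in this post locate Open3D via find_package(Open3D HINTS /usr/local/bin/cmake/),
# so the prefix and those HINTS paths must match.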
cmake -DCMAKE_INSTALL_PREFIX=/usr/local/bin/cmake ..
make -j$(nproc)
sudo make install

Install agile autonomy

At this point you must create an SSH key and add it to your GitHub account, because the repository below is cloned over SSH. A quick way to set it up is sketched below.
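A minimal sketch (the e-mail is a placeholder; paste the printed public key into GitHub → Settings → SSH and GPG keys):

ssh-keygen -t ed25519 -C "you@example.com"   # accept the default file location
cat ~/.ssh/id_ed25519.pub                    # copy the output into your GitHub account
ssh -T git@github.com                        # should greet you with your username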

# agile_autonomy requires older compilers for some reason
sudo apt-get install g++-7 gcc-7
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 100
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 100
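Because update-alternatives now points gcc/g++ at version 7 system-wide, it can be handy to also register the distribution's default compilers so you can switch back after the build; a sketch assuming Ubuntu 20.04's stock GCC 9:

sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-9 90
sudo update-alternatives --config gcc   # interactively choose the active gcc
sudo update-alternatives --config g++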

export ROS_VERSION=noetic
mkdir agile_autonomy_ws
cd agile_autonomy_ws
export CATKIN_WS=./catkin_aa
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS
catkin init
catkin config --extend /opt/ros/$ROS_VERSION
catkin config --merge-devel
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-fdiagnostics-color -DPYTHON_EXECUTABLE:FILEPATH=/usr/bin/python3
cd src
git clone git@github.com:uzh-rpg/agile_autonomy.git

sudo apt install python3-vcstool
vcs-import < agile_autonomy/dependencies.yaml
cd rpg_mpl_ros
git submodule update --init --recursive

# Install extra dependencies (you might need more depending on your OS)
sudo apt-get install -y libqglviewer-dev-qt5

# Install external libraries for rpg_flightmare
sudo apt install -y libzmqpp-dev libeigen3-dev libglfw3-dev libglm-dev

# Install dependencies for the rpg_flightmare renderer
sudo apt install -y libvulkan1 vulkan-utils gdb

# Install other dependencies not listed in the original installation guide
sudo apt-get install libsdl-dev libsdl-image1.2-dev ros-noetic-octomap ros-noetic-octomap-msgs ros-noetic-octomap-ros

# Add environment variable (careful: adjust the path to your local setup)
echo 'export RPGQ_PARAM_DIR=/home/llj/agile_autonomy_ws/catkin_aa/src/rpg_flightmare' >> ~/.bashrc

# IMPORTANT: before building the project, manually change several files in this project as described in the "File modifications" block below
######################################################################################
catkin_aa/src/rpg_mpl_ros/open3d_conversions/CMakeLists.txt
replace line 13 with find_package(Open3D HINTS /usr/local/bin/cmake/)

catkin_aa/src/rpg_mpl_ros/planning_ros_utils/src/planning_rviz_plugins/map_display.cpp
Comment out line 39

catkin_aa/src/agile_autonomy/data_generation/traj_sampler/CMakeLists.txt
replace line 12 with find_package(Open3D HINTS /usr/local/bin/cmake/)

/home/llj/agile_autonomy_ws/catkin_aa/src/agile_autonomy/data_generation/agile_autonomy/CMakeLists.txt
At line 20, add -gencode=arch=compute_86,code=sm_86 (8.6 matches the RTX 30-series GPU used here; adjust to your GPU's compute capability)
Also add export TORCH_CUDA_ARCH_LIST="8.0" to ~/.bashrc
######################################################################################

# Open another terminal window and build the project
cd agile_autonomy_ws/catkin_aa
sudo catkin build
# After catkin build, edit
/home/llj/agile_autonomy_ws/catkin_aa/devel/share/open3d_conversions/cmake/open3d_conversionsConfig.cmake
add line 157 with foreach(path **;/usr/local/bin/cmake/lib)
# This fixes the following error:
CMake Error at /home/linzgood/agile_autonomy_ws/catkin_aa/devel/share/open3d_conversions/cmake/open3d_conversionsConfig.cmake:173 (message):
  Project 'mpl_test_node' tried to find library 'Open3D'. The library is
  neither a target nor built/installed properly. Did you compile project
  'open3d_conversions'? Did you find_package() it before the subdirectory
  containing its code is included?
Call Stack (most recent call first):
  CMakeLists.txt:27 (find_package)

# After making these modifications, rebuild
sudo catkin build
source devel/setup.bash

# Create your learning environment
roscd planner_learning
conda create --name tf_24 python=3.7
conda activate tf_24
pip install tensorflow-gpu==2.4
pip install rospkg==1.2.3 pyquaternion open3d opencv-python
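A quick sanity check that TensorFlow can actually see the GPU (run it inside the tf_24 environment); if it prints an empty device list, revisit the CUDA/cuDNN setup and the libcusolver fix described at the end of this post:

python -c "import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices('GPU'))"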

Install Anaconda

Install Anaconda following the standard installation guide.

If the Python version conflict between Anaconda and ROS makes catkin build fail, build against the system interpreter:

catkin build -DPYTHON_EXECUTABLE=/usr/bin/python3

Also comment out the line export PATH="/home/llj/anaconda3/bin:$PATH" in ~/.bashrc and leave Anaconda's base environment before compiling, as sketched below.
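A minimal sketch of that workaround (the Anaconda path matches the author's setup; adjust it to yours):

# In ~/.bashrc, comment out the line that puts Anaconda first on PATH:
#   export PATH="/home/llj/anaconda3/bin:$PATH"
conda deactivate                                    # leave the base environment
source ~/.bashrc                                    # reload the edited configuration
catkin build -DPYTHON_EXECUTABLE=/usr/bin/python3   # build against the system Python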

Fixing ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)

Rebuild cv_bridge

conda deactivate
mkdir  cv_bridge_ws
cd cv_bridge_ws
mkdir src
cd src
git clone https://github.com/ros-perception/vision_opencv.git
# Find the version of cv_bridge in your package repository
cd vision_opencv
apt-cache show ros-melodic-cv-bridge | grep Version
# Output: Version: 1.13.0-0bionic.20210921.205941
git checkout 1.13.0
cd ../../
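# (Assumption, not spelled out in the original post: the usual "cv_bridge with
# Python 3" guides also configure the workspace before building, pointing catkin
# at the system Python and enabling an install space, which is what the
# "source install/setup.bash" step below relies on. Paths assume Ubuntu 20.04's
# Python 3.8.)
catkin init
catkin config --cmake-args -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.8 -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.8.so
catkin config --install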
# Build
catkin build cv_bridge
# Extend the environment with the new package
source install/setup.bash --extend

# Test cv_bridge
$ python3
 
Python 3.5.2 (default, Nov 23 2017, 16:37:01) 
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from cv_bridge.boost.cv_bridge_boost import getCvType
>>>

If the error persists even after a successful rebuild, see the comments section of the referenced post and modify the following callbacks in /agile_autonomy_ws/catkin_aa/src/agile_autonomy/planner_learning/src/PlannerLearning/PlannerBase.py (there are two places to change; both decode the ROS image message with np.frombuffer instead of cv_bridge):

    def callback_image(self, data):
        '''
        Reads an image and generates a new plan.
        '''
        try:
            # Decode the image directly from the ROS message buffer instead of
            # calling self.bridge.imgmsg_to_cv2(data, "bgr8"), which triggers the
            # PyInit_cv_bridge_boost error.
            image = np.frombuffer(data.data, dtype=np.uint8).reshape(data.height, data.width, -1)
            if self.quad_name == 'hawk':
                image = cv2.flip(image, -1)  # a negative flip code flips both axes
            if np.sum(image) != 0:
                self.image = self.preprocess_img(image)
        except CvBridgeError as e:
            print(e)

    def callback_depth(self, data):
        '''
        Reads a depth image and saves it.
        '''
        try:
            if self.quad_name == 'hummingbird':
                # Decode the 16-bit depth image from the message buffer instead of
                # calling self.bridge.imgmsg_to_cv2(data, '16UC1').
                depth = np.frombuffer(data.data, dtype=np.uint16).reshape(data.height, data.width, -1)
                # ... (rest of the function unchanged)

Fixing the segmentation fault

2023-06-15 22:06:58.512423: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-06-15 22:07:00.958351: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2023-06-15 22:07:00.958428: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2023-06-15 22:07:00.958508: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-06-15 22:07:00.958592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 3090 Ti computeCapability: 8.6
coreClock: 1.86GHz coreCount: 84 deviceMemorySize: 23.65GiB deviceMemoryBandwidth: 938.86GiB/s
2023-06-15 22:07:00.958600: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
2023-06-15 22:07:00.960492: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2023-06-15 22:07:00.960567: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2023-06-15 22:07:00.961168: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10
2023-06-15 22:07:00.961307: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10
2023-06-15 22:07:00.961379: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcusolver.so.10'; dlerror: libcusolver.so.10: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/llj/anaconda3/envs/tf_24/lib/python3.7/site-packages/cv2/../../lib64:/home/llj/agile_autonomy_ws/catkin_aa/devel/lib:/opt/ros/noetic/lib:/usr/local/cuda/lib64:/usr/local/cuda/lib64
2023-06-15 22:07:00.961731: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.11
2023-06-15 22:07:00.961805: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2023-06-15 22:07:00.961810: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2023-06-15 22:07:00.962995: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2023-06-15 22:07:00.963011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:
2023-06-15 22:07:00.963014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267]      
------------------------------------------
Initializing from scratch.
------------------------------------------
Training Network
Found 1245 images  belonging to 17 experiments:
Found 1245 images  belonging to 17 experiments:
/home/llj/anaconda3/envs/tf_24/lib/python3.7/site-packages/tensorflow/python/keras/backend.py:434: UserWarning: `tf.keras.backend.set_learning_phase` is deprecated and will be removed after 2020-10-11. To update it, simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
  warnings.warn('`tf.keras.backend.set_learning_phase` is deprecated and '
  0%|                                 | 0/156 [00:00<?, ?it/s]2023-06-15 22:07:06.362969: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2023-06-15 22:07:06.381784: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2995200000 Hz
 48%|███████████▌            | 75/156 [00:11<00:08,  9.93it/s]Segmentation fault (core dumped)

The problem is that TensorFlow failed to use the GPU (note the missing libcusolver.so.10 in the log above). Fix:

cd /usr/local/cuda-11.2/lib64   # enter the CUDA library folder
ls -al                          # check which libcusolver version is present
sudo mv libcusolver.so.12 libcusolver.so.10
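Renaming the library is destructive, and on a stock CUDA 11.2 install the version present is typically libcusolver.so.11 rather than .12, so a gentler alternative (an assumption, not from the original post) is a compatibility symlink matching whatever ls showed:

cd /usr/local/cuda-11.2/lib64
sudo ln -s libcusolver.so.11 libcusolver.so.10   # adjust the source name to the version actually present
sudo ldconfig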