ros2 launch: combining Gazebo, YOLOv8, and RViz2

Contents

1. The ROS2 YOLOv8 detector depends on several libraries: rclpy, cv_bridge, std_msgs, sensor_msgs, sensor_msgs_py, vision_msgs

2. Create a new package

3. Create detect_demo.launch.py: load Gazebo and RViz2 first, then add the local YOLOv8 node once its file is written

4. Write the local YOLOv8 node file

5. Modify setup.py

6. Build the package

7. Run the launch file


1. The ROS2 YOLOv8 detector depends on several libraries: rclpy, cv_bridge, std_msgs, sensor_msgs, sensor_msgs_py, vision_msgs

(1) rclpy: the ROS Client Library for Python, used to work with ROS2 nodes, topics, and services
(2) cv_bridge: a package that converts between ROS image messages and OpenCV images
(3) std_msgs: the standard message package, containing message definitions for common basic data types

The three above cover the image pipeline; the two below provide access to other data types:
---------------------------------------------------------------

(4) sensor_msgs_py: provides the point_cloud2 module
(5) vision_msgs: algorithm-agnostic computer vision message types for ROS; the package defines a set of messages to unify computer vision and object detection work in ROS

# Install the libraries:
sudo apt-get install ros-galactic-rclpy
sudo apt-get install ros-galactic-cv-bridge
sudo apt-get install ros-galactic-std-msgs
sudo apt-get install ros-galactic-sensor-msgs-py
sudo apt-get install ros-galactic-vision-msgs
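
After installing, a quick import check confirms that the Python side resolves (a minimal throwaway script, assuming a sourced Galactic environment):

# check_deps.py : sanity check that every dependency imports
import rclpy
import cv_bridge
import std_msgs.msg
import sensor_msgs.msg
from sensor_msgs_py import point_cloud2
import vision_msgs.msg

print("all dependencies import OK")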

2. Create a new package

ros2 pkg create ros_yolov8 --build-type ament_python --node-name detect_node --dependencies rclpy cv_bridge std_msgs sensor_msgs sensor_msgs_py vision_msgs

Output:

going to create a new package
package name: ros_yolov8
destination directory: 
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: 
licenses: ['TODO: License declaration']
build type: ament_python
dependencies: ['rclpy', 'cv_bridge', 'std_msgs', 'sensor_msgs', 'sensor_msgs_py', 'vision_msgs']
node_name: detect_node
creating folder ./ros_yolov8
creating ./ros_yolov8/package.xml
creating source folder
creating folder ./ros_yolov8/ros_yolov8
creating ./ros_yolov8/setup.py
creating ./ros_yolov8/setup.cfg
creating folder ./ros_yolov8/resource
creating ./ros_yolov8/resource/ros_yolov8
creating ./ros_yolov8/ros_yolov8/__init__.py
creating folder ./ros_yolov8/test
creating ./ros_yolov8/test/test_copyright.py
creating ./ros_yolov8/test/test_flake8.py
creating ./ros_yolov8/test/test_pep257.py
creating ./ros_yolov8/ros_yolov8/detect_node.py

 

# ./ros_yolov8/package.xml
<?xml version="1.0"?>
......

  <depend>rclpy</depend>
  <depend>cv_bridge</depend>
  <depend>std_msgs</depend>
  <depend>sensor_msgs</depend>
  <depend>sensor_msgs_py</depend>
  <depend>vision_msgs</depend>

......
</package>

3. Create detect_demo.launch.py: load Gazebo and RViz2 first, then add the local YOLOv8 node once its file is written

 

# Create a new directory to hold the launch files
cd ./ros_yolov8 && mkdir launch 
cd ./ros_yolov8/launch && touch detect_demo.launch.py
tree ./ros_yolov8/launch

Output:

./ros_yolov8/launch
└── detect_demo.launch.py
#!/usr/bin/python3
# ./ros_yolov8/launch/detect_demo.launch.py

import os

from ament_index_python.packages import get_package_share_directory, PackageNotFoundError

from launch import LaunchDescription
from launch_ros.actions import Node

from launch.substitutions import LaunchConfiguration
from launch.actions import DeclareLaunchArgument
from launch.conditions import IfCondition

from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource



def generate_launch_description():
    # various file paths
    ros_root_path = "/xxx/ros2_ws"
    pkg_name = "ros_yolov8"
    try:
        share_pkg_path = get_package_share_directory(pkg_name)
    except PackageNotFoundError:
        # fall back to the install tree when the ament index lookup fails
        share_pkg_path = os.path.join(ros_root_path, "install", pkg_name, "share", pkg_name)

    print(f"{share_pkg_path=}")
    assert os.path.exists(share_pkg_path)

    # -1-Include the Gazebo launch file, provided by the gazebo_ros package
    world_file_path = os.path.join(share_pkg_path, 'worlds', 'yolo.world')
    # include another launch file: /opt/ros/galactic/share/gazebo_ros/launch/gazebo.launch.py
    # $ ros2 launch /opt/ros/galactic/share/gazebo_ros/launch/gazebo.launch.py world:=/xxx/yolo.world
    gazebo_ros_dirpath = get_package_share_directory('gazebo_ros')
    print(f"{gazebo_ros_dirpath=}")
    gazebo_launch_filepath = os.path.join(gazebo_ros_dirpath, 'launch', 'gazebo.launch.py')
    print(f"{gazebo_launch_filepath=}")
    assert os.path.exists(gazebo_launch_filepath)

    # -2-Include the Rviz2 config file, provided by the rviz2 package
    # /opt/ros/galactic/lib/rviz2/rviz2
    # $ ros2 run rviz2 rviz2 -d 
    # start rviz2
    rviz_filepath = os.path.join(share_pkg_path, 'rviz2', 'yolo.rviz')
    print(f"{rviz_filepath=}")
    assert os.path.exists(rviz_filepath)

    # Nodes
    gazebo_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([gazebo_launch_filepath]),
        launch_arguments={'world': world_file_path}.items()
    )

    static_transform_cmd = Node(package='tf2_ros',
                                executable='static_transform_publisher',
                                name='static_transform_publisher',
                                arguments=["0", "0", "0", "0", "0", "0", "map", "camera_link_optical"],
                                output='screen'
                                )
    start_rviz_cmd = Node(package='rviz2',
                          executable='rviz2',
                          name='rviz2',
                          arguments=['-d', rviz_filepath],
                          output='screen'
                          )

    # start yolov8_node
    start_yolov8_node = Node(
        package='ros_yolov8',
        executable='yolov8_node',
        name='yolov8_node',
        output='screen',
        arguments=["-model", "/xxx/yolo.pt"]
    )

    # Add everything to launch description and return
    ld = LaunchDescription()
    ld.add_action(gazebo_cmd)
    ld.add_action(static_transform_cmd)
    ld.add_action(start_rviz_cmd)
    ld.add_action(start_yolov8_node)
    return ld


"""
/opt/ros/galactic/lib/tf2_ros/static_transform_publisher
# ros2 run tf2_ros static_transform_publisher "0" "0" "0" "0" "0" "0" "map" "camera_link_optical"
# -2-1- publishes a fixed map -> camera_link_optical transform
"""

4. Write the local YOLOv8 node file

# ./ros_yolov8/ros_yolov8/yolov8_node.py

import os
import sys
import time
import argparse
import random
import torch
# rclpy: the ROS2 client library for Python
import rclpy
from rclpy.qos import qos_profile_sensor_data
from rclpy.node import Node
# image handling: convert sensor_msgs/msg/Image topics to cv2 images
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
# yolov8
from ultralytics import YOLO


def str2bool(v):
    # argparse's type=bool treats any non-empty string (including "False") as True,
    # so boolean flags are parsed explicitly here
    return str(v).lower() in ("1", "true", "yes")


def parse_argument():
    parser = argparse.ArgumentParser(description='detect images from a topic (gazebo)')
    parser.add_argument('-model', type=str, default='/xxx/yolo.pt')
    parser.add_argument('-device', type=str, default='cpu')
    parser.add_argument('-conf_threshold', type=float, default=0.5)
    parser.add_argument('-iou_threshold', type=float, default=0.7)
    parser.add_argument('-enable', type=str2bool, default=True)
    parser.add_argument('-input_image_topic', type=str, default='/camera/image_raw')
    parser.add_argument('-show_inference_image', type=str2bool, default=True)
    parser.add_argument('-save', type=str2bool, default=False)

    # parse_known_args ignores the extra arguments ROS2 injects (e.g. --ros-args)
    args, unknown = parser.parse_known_args()
    return args


class Yolov8Node(Node):
    # Node constructor
    def __init__(self) -> None:
        super().__init__("yolov8_node")

        # node params
        self.args = parse_argument()
        #
        self._class_to_color = {}
        self.cv_bridge = CvBridge()

        # yolo
        self.yolo = YOLO(self.args.model)
        self.yolo.fuse()
        print(f"{self.args.device=}")
        self.yolo.to(self.args.device)

        # topic publishers & subscribers
        self._infer_pub = self.create_publisher(Image, "inference_image", 10)
        self._image_sub = self.create_subscription(
            Image,
            self.args.input_image_topic,
            self.image_cb,
            qos_profile_sensor_data
        )


    def image_cb(self, msg: Image) -> None:

        # if self.args.enable:
        # record the start time (perf_counter, so the FPS interval uses a single clock)
        fps_start_t = time.perf_counter()

        # convert to cv image & predict
        cv_image = self.cv_bridge.imgmsg_to_cv2(msg)

        results = self.yolo.predict(
            source=cv_image,
            verbose=False,
            stream=False,
            conf=self.args.conf_threshold,
            iou=self.args.iou_threshold,
            show=self.args.show_inference_image,
            mode="predict",
            save=self.args.save
        )
        end_time = time.perf_counter()
        
        # visualize the results on the frame
        annotated_image = results[0].plot()

        # record the end time and calculate FPS
        fps = 1.0 / (end_time - fps_start_t)
        cv2.putText(annotated_image, "FPS: {:.2f}".format(fps), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)

        results = results[0].cpu()
        for b in results.boxes:
            label = self.yolo.names[int(b.cls)]
            score = float(b.conf)

            if score < self.args.conf_threshold:
                continue

            # get boxes values
            box = b.xywh[0]
            x_center = float(box[0])
            y_center = float(box[1])
            x_size = float(box[2])
            y_size = float(box[3])

            x_min = round(x_center - x_size / 2.0)
            x_max = round(x_center + x_size / 2.0)
            y_min = round(y_center - y_size / 2.0)
            y_max = round(y_center + y_size / 2.0)

            # draw boxes for debugging (assign each class a stable random color;
            # note: don't reuse the loop variable b for the blue channel)
            if label not in self._class_to_color:
                self._class_to_color[label] = (random.randint(0, 255),
                                               random.randint(0, 255),
                                               random.randint(0, 255))
            color = self._class_to_color[label]

            cv2.rectangle(cv_image, (x_min, y_min), (x_max, y_max), color, 2)

            label = "{} ({:.3f})".format(label, score)
            pos = (x_min + 5, y_min + 25)
            font = cv2.FONT_HERSHEY_SIMPLEX
            cv2.putText(cv_image, label, pos, font,
                        1, color, 1, cv2.LINE_AA)

        # publish the annotated frame once per image, after the loop (not once per box)
        self._infer_pub.publish(self.cv_bridge.cv2_to_imgmsg(annotated_image, encoding=msg.encoding))



def main():
    rclpy.init()
    node = Yolov8Node()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
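
vision_msgs is declared as a dependency in step 1 but never used by the node above. A sketch of how the boxes could additionally be published as a vision_msgs/Detection2DArray; field names follow the Galactic version of vision_msgs (where ObjectHypothesisWithPose carries a string id; later distros renamed these fields), and the helper itself is hypothetical:

# sketch: convert YOLO boxes to a vision_msgs/Detection2DArray (hypothetical helper)
from vision_msgs.msg import Detection2D, Detection2DArray, ObjectHypothesisWithPose

def boxes_to_detections(results, names, header):
    arr = Detection2DArray()
    arr.header = header
    for box in results.boxes:
        det = Detection2D()
        det.header = header
        hyp = ObjectHypothesisWithPose()
        hyp.id = names[int(box.cls)]      # class label (string field in Galactic)
        hyp.score = float(box.conf)
        det.results.append(hyp)
        x, y, w, h = (float(v) for v in box.xywh[0])
        det.bbox.center.x = x             # bbox center, pixels
        det.bbox.center.y = y
        det.bbox.size_x = w
        det.bbox.size_y = h
        arr.detections.append(det)
    return arr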

5. Modify setup.py

# ./ros_yolov8/setup.py
from setuptools import setup
import os
from glob import glob

package_name = 'ros_yolov8'

setup(
    name=package_name,
    version='0.0.0',
    packages=[package_name],
    # installed to ros2_ws/install/{package_name}/share/*, copied from src/{package_name}/*
    data_files=[
        ('share/ament_index/resource_index/packages', ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        # During installation, we need to copy the launch files
        (os.path.join('share', package_name, "launch"), glob('launch/*launch.[pxy][yma]*')),
        # Same with the RViz2 configuration file.
        (os.path.join('share', package_name, "rviz2"), glob('rviz2/*')),
        # And the Gazebo world files.
        (os.path.join('share', package_name, "worlds"), glob('worlds/*')),
        # And the config files.
        (os.path.join('share', package_name, "config"), glob('config/*')),
    ],
......
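
The "......" above elides the rest of setup.py; for the launch file's executable='yolov8_node' to resolve, the console scripts must also be registered there. A sketch of the elided entry_points section (detect_node was generated by pkg create; the yolov8_node line is the one added by hand):

    entry_points={
        'console_scripts': [
            'detect_node = ros_yolov8.detect_node:main',
            'yolov8_node = ros_yolov8.yolov8_node:main',
        ],
    },
)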

6. Build the package

source /opt/ros/galactic/setup.bash
colcon build --packages-select ros_yolov8
. install/setup.bash

7. Run the launch file

ros2 launch /xxx/ros_yolov8/launch/detect_demo.launch.py
# or
ros2 launch ros_yolov8 detect_demo.launch.py
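
Once everything is up, the annotated stream can be checked from another terminal (a quick verification, assuming rqt_image_view is installed):

ros2 topic list | grep inference_image
ros2 run rqt_image_view rqt_image_view /inference_image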

1 What object detection is

Object detection is the task of finding every object of interest in an image and determining its category and location; it is one of the core problems in computer vision. Because objects differ in appearance, shape, and pose, and imaging is disturbed by lighting, occlusion, and similar factors, object detection has long been among the most challenging problems in the field.

The task splits into two key sub-tasks: object localization and object classification. The detector first finds where each object sits in the image (localization), then assigns it a concrete class (classification). The output consists of a bounding box (usually written (x1, y1, x2, y2), the coordinates of the top-left and bottom-right corners), a confidence score giving the probability that the box contains an object, and the per-class probabilities (class probabilities are computed first; Softmax then yields the class label).

1.1 Two-stage methods

Mainstream deep-learning detectors fall into two families: two-stage and one-stage. Two-stage methods split detection into two phases. The first is region proposal generation, which produces candidate object boxes (bounding-box proposals); it typically extracts features from the input image with a convolutional neural network (CNN) and then generates candidates with techniques such as selective search. The second phase is classification and box refinement: the candidates from the first phase are fed into another CNN for classification, and their positions are fine-tuned based on the result. Two-stage methods are more accurate but comparatively slow.

Common two-stage detectors include the R-CNN family and SPPNet.

1.2 One-stage methods

One-stage methods extract features with a single model and use them directly for classification and localization, without generating region proposals. They are fast precisely because the proposal step is skipped, but tend to be less accurate, since candidate objects are never pre-filtered.

Common one-stage detectors include the YOLO family, the SSD family, and RetinaNet.

2 Common terms

2.1 NMS (Non-Maximum Suppression)

A detection model generally emits many predicted boxes per object, and refining hundreds or thousands of them is impractical, so the results are coarsely filtered first. NMS (non-maximum suppression) picks the most representative boxes from the candidates, which speeds up the algorithm. The procedure (a runnable sketch follows at the end of this appendix):

(1) Set a confidence threshold and discard every box whose confidence score falls below it.
(2) Sort the remaining boxes by confidence score, descending, and select the highest-scoring one.
(3) Walk the remaining boxes and delete any whose overlap (IoU) with the selected box exceeds a set threshold (commonly 0.7); above that threshold, the two boxes are assumed to contain the same object.
(4) Select the highest-scoring box among those not yet processed and repeat until every box has been handled.

2.2 IoU (Intersection over Union)

IoU measures the overlap between two bounding boxes: when the predicted box differs little from the ground-truth box, i.e. the overlap is large, the model's prediction is accurate. For boxes A and B:

IoU = area(A ∩ B) / area(A ∪ B)

2.3 mAP (mean Average Precision)

mAP (mean average precision) is the most important metric for evaluating detection models; it lies between 0 and 1, and higher is better. mAP is the mean of the AP (Average Precision) values, so AP must be understood first, and AP in turn is built on the detection notions of Precision and Recall.

First fix a confidence threshold (Confidence Threshold) and an IoU threshold (usually 0.5; mAP at 0.75 and 0.9 is also commonly reported). A predicted box counts as a True Positive (TP) only when all three conditions hold:

(1) Confidence Score > Confidence Threshold
(2) the predicted class matches the ground-truth (Ground truth) class
(3) the predicted box's IoU with the ground truth exceeds the IoU threshold

A prediction that fails condition (2) or (3) is a False Positive (FP). When several predictions correspond to the same ground truth, only the one with the highest confidence score counts as a True Positive; the rest are False Positives.

(figure: illustration of Precision and Recall; image not preserved)

Precision is the ratio of TP to the total number of predicted bounding boxes.
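
A minimal NumPy sketch of the NMS procedure from 2.1, using the IoU formula from 2.2 (illustrative only, not part of the ROS2 node above):

# nms_sketch.py : the NMS procedure from section 2.1 (illustrative)
import numpy as np

def nms(boxes, scores, score_thr=0.5, iou_thr=0.7):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
    idx = np.flatnonzero(scores >= score_thr)      # (1) drop low-confidence boxes
    order = idx[np.argsort(scores[idx])[::-1]]     # (2) sort survivors by score, descending
    kept = []
    while order.size > 0:
        i = order[0]                               # keep the current best box
        kept.append(int(i))
        rest = order[1:]
        # (3) IoU of box i against the remaining boxes (section 2.2)
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]               # (4) discard heavy overlaps, repeat
    return kept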