docker_ROS: Using usb_cam and Camera Calibration

Contents

Preparation

Prepare the calibration board

Create a container

Create a ROS package that publishes the usb_cam topic

Write the code

Build

Run the package

Calibration

Install the camera_calibration package

Start the package that publishes the usb_cam topic

Start the camera_calibration package


Preparation

A USB camera

A calibration board (checkerboard)

A Docker image with ROS installed

Prepare the calibration board

Image link:

Checkerboard image

Print the image and mount it flat. Alternatively, display it on a phone or tablet screen; in that case, measure the side length of one square and make sure the image is not rescaled at any point during calibration.

Create a container

Create the container with the --privileged flag. The basic command:

sudo docker run -it --privileged --name=lab1 your_image_name  /bin/bash

Replace your_image_name with the name of your Docker image. The --privileged flag lets the container share the host's devices, so it sees the same /dev entries as the host. Connect the camera before entering the container; otherwise the camera device may not appear inside the container.

Set other docker run options as needed. (To show the calibration GUI from inside the container later on, you will also likely need X11 access, e.g. passing the host's DISPLAY variable and mounting the X11 socket.)

Connect the camera

Enter the container

Create a ROS package that publishes the usb_cam topic

Create a new ROS package that will publish the usb_cam image topic.

Write the code

The node is written in C++; an example follows.

Create the package (for example, in ~/catkin_ws/src: catkin_create_pkg usb_cam roscpp cv_bridge image_transport sensor_msgs std_msgs), then add the core source code:

img_publisher.cpp


#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/videoio.hpp>
#include <cv_bridge/cv_bridge.h>
#include <cstdlib>
#include <iostream>

int main(int argc, char** argv) {
    ros::init(argc, argv, "img_publisher");
    ros::NodeHandle nh;

    // image_transport publisher for the camera/image topic
    image_transport::ImageTransport it(nh);
    image_transport::Publisher pub = it.advertise("camera/image", 1);

    cv::VideoCapture cap;
    cv::Mat frame;

    // The camera index can be passed as the first argument, e.g. "rosrun usb_cam img_publisher 1"
    int deviceID = 0;
    if (argc > 1)
        deviceID = std::atoi(argv[1]);
    int apiID = cv::CAP_ANY;      // let OpenCV pick the capture backend
    cap.open(deviceID + apiID);   // CAP_ANY == 0, so this opens device deviceID (typically /dev/video<deviceID>)
    if (!cap.isOpened()) {
        std::cerr << "ERROR! Unable to open camera" << std::endl;
        return -1;
    }

    // Grab frames at 30 Hz and publish them as sensor_msgs/Image (bgr8)
    ros::Rate loop_rate(30);
    while (nh.ok()) {
        cap.read(frame);
        if (!frame.empty()) {
            sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg();
            pub.publish(msg);
        }
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}

CMakeLists.txt

cmake_minimum_required(VERSION 3.0.2)
project(usb_cam)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  image_transport
  roscpp
  sensor_msgs
  std_msgs
)
find_package(OpenCV REQUIRED)
message(${OpenCV_VERSION})


###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES usb_cam
#  CATKIN_DEPENDS cv_bridge image_transport roscpp sensor_msgs std_msgs
#  DEPENDS system_lib
)

###########
## Build ##
###########

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
  ${catkin_INCLUDE_DIRS}
  ${OpenCV_INCLUDE_DIRS}
)


add_executable(img_publisher src/img_publisher.cpp)
## img_viewer is a small viewer node; a sketch of its source is given after this file
add_executable(img_viewer src/img_viewer.cpp)


target_link_libraries(img_publisher ${catkin_LIBRARIES} ${OpenCV_LIBS})
target_link_libraries(img_viewer ${catkin_LIBRARIES} ${OpenCV_LIBS})
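
The CMakeLists.txt above also declares an img_viewer target, but the article does not list its source, so the build would fail without it. Below is a minimal sketch of what src/img_viewer.cpp could look like: a node that subscribes to camera/image and shows the frames in an OpenCV window (alternatively, simply remove the img_viewer lines from CMakeLists.txt).

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/highgui.hpp>
#include <cv_bridge/cv_bridge.h>

// Display each received frame in an OpenCV window.
void imageCallback(const sensor_msgs::ImageConstPtr& msg) {
    try {
        cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
        cv::waitKey(1);
    } catch (const cv_bridge::Exception& e) {
        ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
    }
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "img_viewer");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub = it.subscribe("camera/image", 1, imageCallback);
    ros::spin();
    return 0;
}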

Build

Open a terminal:

cd ~/catkin_ws
catkin_make

Add the environment setup to ~/.bashrc.

Open a terminal:

gedit ~/.bashrc

Append at the end of the file:

source ~/catkin_ws/devel/setup.bash

Save the file and close it, then open a new terminal (or run source ~/.bashrc) so the change takes effect.

Run the package

Open a terminal:

rosrun usb_cam img_publisher

This publishes an image topic.

List the topics:

rostopic list

The node only runs correctly when a camera is connected; if so, you will see a set of topics including /camera/image.

Calibration

Install the camera_calibration package

List the installed ROS packages:

rospack list

Check whether the camera_calibration package is listed; if not, install it:

rosdep install camera_calibration

Start the package that publishes the usb_cam topic

Open a terminal:

rosrun usb_cam img_publisher

List the topics:

rostopic list

The node only runs correctly when a camera is connected; if so, you will see a set of topics including /camera/image.

Start the camera_calibration package

Open a terminal:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.20 image:=/camera/image

Here, --size 8x6 is the number of interior corners of the checkerboard; a board of 9x7 squares has 8x6 interior corners (see the detection sketch below).

--square 0.20 is the side length of one square, in meters.

image:=/camera/image remaps image to the name of your camera topic.
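
As a sanity check that the printed (or displayed) board matches the --size argument, the interior-corner grid can be detected offline with OpenCV. A minimal standalone C++ sketch, assuming a test photo of the board saved as board.jpg (a hypothetical file name):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

int main() {
    // Load a photo of the checkerboard (hypothetical file name)
    cv::Mat img = cv::imread("board.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) {
        std::cerr << "Could not read board.jpg" << std::endl;
        return -1;
    }

    // 8x6 interior corners, matching --size 8x6
    cv::Size pattern(8, 6);
    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(img, pattern, corners);

    std::cout << (found ? "Corner grid detected" : "Corner grid NOT detected") << std::endl;
    return 0;
}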

Once it starts, a calibration window pops up. Move the checkerboard around, pausing between positions, until the CALIBRATE button on the right lights up.

Moving the Checkerboard

In order to get a good calibration you will need to move the checkerboard around in the camera frame such that:

    checkerboard on the camera's left, right, top and bottom of field of view
        X bar - left/right in field of view
        Y bar - top/bottom in field of view
        Size bar - toward/away and tilt from the camera
    checkerboard filling the whole field of view
    checkerboard tilted to the left, right, top and bottom (Skew)

At each step, hold the checkerboard still until the image is highlighted in the calibration window.


Click CALIBRATE to start the computation. When it finishes, the SAVE button lights up; click SAVE to store the results.

If you are satisfied with the calibration, click COMMIT to send the calibration parameters to the camera for permanent storage. The GUI exits and you should see "writing calibration data to ..." in the console.

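The saved calibration yields the camera matrix (fx, fy, cx, cy) and the distortion coefficients. A minimal C++ sketch of how such results might be used to undistort a frame; the numeric values below are placeholders, not real calibration output, and should be replaced with the numbers from your own result:

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

// Undistort one frame using intrinsics produced by the calibration.
// The values here are placeholders; substitute your own calibration output.
cv::Mat undistortFrame(const cv::Mat& frame) {
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        600.0,   0.0, 320.0,   // fx,  0, cx
          0.0, 600.0, 240.0,   //  0, fy, cy
          0.0,   0.0,   1.0);
    cv::Mat dist = (cv::Mat_<double>(1, 5) << 0.1, -0.2, 0.0, 0.0, 0.0); // k1, k2, p1, p2, k3

    cv::Mat undistorted;
    cv::undistort(frame, undistorted, K, dist);
    return undistorted;
}

Calling such a function on each frame (for example inside the publisher loop) would then publish undistorted images.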

References

Calibration procedure and parameters:

相机内参标定究竟标了什么?相机内参外参保姆级教程 - 知乎

Introduction to the camera_calibration package:

camera_calibration/Tutorials/MonocularCalibration - ROS Wiki
