Eye-to-Hand Calibration of a Kinect V2 + Franka Panda on Ubuntu 20.04

This article walks through eye-to-hand calibration of a Franka Panda robot with a Kinect V2 under Ubuntu 20.04 and ROS Noetic: installing the required packages, writing the kinect_panda_calibrate.launch and kinect_panda_easy.launch files, and resolving problems that may come up during compilation.

A record of my own hand-eye calibration process.

Step 1 — Download and build the required ROS packages

① Install libfranka

sudo apt install build-essential cmake git libpoco-dev libeigen3-dev
git clone --recursive https://github.com/frankaemika/libfranka
cd libfranka
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTS=OFF ..
cmake --build .

② Install franka_ros

mkdir -p catkin_ws/src
cd catkin_ws
source /opt/ros/noetic/setup.sh
catkin_init_workspace src
git clone --recursive https://github.com/frankaemika/franka_ros src/franka_ros
rosdep install --from-paths src --ignore-src --rosdistro noetic -y --skip-keys libfranka
catkin_make -DCMAKE_BUILD_TYPE=Release -DFranka_DIR:PATH=/path/to/libfranka/build
source devel/setup.sh

③ Install vision_visp

After cloning and before building, delete the visp_tracker and visp_auto_tracker packages inside the repository; they are not needed for calibration and make the build take far too long. (Alternatively, create an empty file named CATKIN_IGNORE in each of those two package directories and catkin will skip them without deleting anything.)

cd ~/catkin_ws/src 
git clone -b noetic-devel https://github.com/lagadic/vision_visp.git
cd ..
catkin_make

④ Install aruco_ros

cd ~/catkin_ws/src 
git clone -b noetic-devel https://github.com/pal-robotics/aruco_ros
cd ..
catkin_make

⑤ Install easy_handeye

cd ~/catkin_ws/src 
git clone https://github.com/IFL-CAMP/easy_handeye
cd ..
catkin_make

⑥ Install panda_moveit_config

cd ~/catkin_ws/src 
git clone -b noetic-devel https://github.com/ros-planning/panda_moveit_config.git
cd ..
catkin_make

⑦ Install and test the Kinect V2 packages

See my previous article: notes on setting up Kinect V2 with Ubuntu 20.04 + ROS Noetic (CSDN blog).

Step 2 — Write the launch files

First, consider the dimensions of your calibration board and use the values of your own board. Mine is a single ArUco marker.

Marker generator: https://chev.me/arucogen/

Write down your Marker ID and Marker size parameters!

Putting everything into a single launch file easily triggers all sorts of errors, so I split it into two separate files.

Both launch files go into the launch directory of the easy_handeye package.

① Write kinect_panda_calibrate.launch

<?xml version="1.0" ?>
<launch>
    <arg name="namespace_prefix" default="panda_eob_calib"/>

    <!-- (start your robot's MoveIt! stack, e.g. include its moveit_planning_execution.launch) -->
    <include file="$(find panda_moveit_config)/launch/franka_control.launch">
        <arg name="robot_ip" value="172.16.0.2"/> <!-- set your robot ip -->
        <arg name="load_gripper" value="false"/>
    </include>        
    
    <arg name="marker_size" default="0.1"/>  <!-- marker side length in metres -->
    <arg name="marker_id" default="582"/>
    <arg name="ref_frame" default=""/>  <!-- leave empty and the pose will be published wrt param parent_name -->
    <arg name="corner_refinement" default="LINES"/>  <!-- NONE, HARRIS, LINES, SUBPIX -->

    <!-- start ArUco -->
    <node name="aruco_tracker" pkg="aruco_ros" type="single">
        <remap from="/camera_info" to="/kinect2/hd/camera_info"/>
        <remap from="/image" to="/kinect2/hd/image_color_rect"/>
        <param name="image_is_rectified" value="true"/>
        <param name="marker_size" value="$(arg marker_size)"/>
        <param name="marker_id" value="$(arg marker_id)"/>
        <param name="reference_frame" value="kinect2_rgb_optical_frame"/>
        <param name="camera_frame" value="kinect2_rgb_optical_frame"/>
        <param name="marker_frame" value="camera_marker"/>
        <param name="corner_refinement" value="$(arg corner_refinement)"/>
    </node>

</launch>

Note the following parameters:

robot_ip must be set to your own Franka arm's IP. load_gripper indicates whether a gripper is mounted; I have none, so it is false. Then come marker_size and marker_id: replace them with your own board's parameters (marker_size is the side length in metres).

② Write kinect_panda_easy.launch

<?xml version="1.0" ?>
<launch>

    <arg name="namespace_prefix" default="panda_eob_calib" /> 
    <include file="$(find easy_handeye)/launch/calibrate.launch">
        <arg name="eye_on_hand" value="false"/>
        <arg name="namespace_prefix" value="$(arg namespace_prefix)"/>
        <arg name="move_group" value="panda_arm" />
        <!-- <arg name="move_group" value="panda_manipulator"/>  (the move_group name for automatic robot motion with MoveIt!) -->
        <arg name="freehand_robot_movement" value="false"/>


        <!-- fill in the following parameters according to your robot's published tf frames -->
        <arg name="robot_base_frame" value="panda_link0"/>
        <arg name="robot_effector_frame" value="panda_link8"/>

        <!-- fill in the following parameters according to your tracking system's published tf frames -->
        <arg name="tracking_base_frame" value="kinect2_rgb_optical_frame"/>
        <arg name="tracking_marker_frame" value="camera_marker"/>
    </include>

    <!-- (publish tf after the calibration) -->
    <!-- roslaunch easy_handeye publish.launch eye_on_hand:=false namespace_prefix:=$(arg namespace_prefix) -->

</launch>

Note: since this is an eye-to-hand setup (camera fixed outside the robot), eye_on_hand is set to false. I have no gripper mounted, so the end-effector frame robot_effector_frame is panda_link8.
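After a successful calibration, easy_handeye saves the result (the transform from the robot base frame to the camera frame) as a translation plus a quaternion in a YAML file, by default under ~/.ros/easy_handeye/. Below is a minimal sketch of turning that output into a 4×4 homogeneous matrix; the numeric values are placeholders rather than a real calibration result, and NumPy is assumed:

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    """Rotation matrix from a quaternion in ROS (x, y, z, w) order."""
    n = np.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)  # normalise defensively
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def handeye_to_homogeneous(tx, ty, tz, qx, qy, qz, qw):
    """Build T_base_camera from easy_handeye's translation + rotation output."""
    T = np.eye(4)
    T[:3, :3] = quat_to_matrix(qx, qy, qz, qw)
    T[:3, 3] = [tx, ty, tz]
    return T

# Placeholder numbers in the same layout as the saved YAML (translation xyz,
# quaternion xyzw) -- substitute the values from your own calibration:
T_base_cam = handeye_to_homogeneous(0.5, -0.3, 0.8, 0.0, 0.0, 0.0, 1.0)
```

Alternatively, the result can be broadcast over TF by running the publish.launch command that is commented at the bottom of kinect_panda_easy.launch.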

Step 3 — Calibration

① Prepare for calibration:

Open three terminals and run the following commands, one per terminal (remember to source your workspace in each):

roslaunch kinect2_bridge kinect2_bridge.launch
roslaunch easy_handeye kinect_panda_calibrate.launch
roslaunch easy_handeye kinect_panda_easy.launch

Running them in this order should bring up two RViz windows plus the two easy_handeye windows.

In either RViz window, add an Image display subscribed to the Kinect: Add → Image → Image Topic → /aruco_tracker/result (the topic name follows the node name in the launch file). During calibration, every calibration pose must show the detected marker in this image view!

② Start calibrating:

Now operate the other two windows.

Click Check starting pose; if "Ready to start: click on next pose" pops up, sampling can begin. Click Take Sample to record a sample, then click Next Pose followed by Plan to move the arm to the next pose.

Repeat this sequence until seventeen samples have been taken, then click Compute to calculate and output the transformation matrix.
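Once Compute has produced the matrix, it can be used to bring Kinect detections into the robot's coordinates. A sketch with an illustrative (made-up) T_base_cam, assuming NumPy; replace the matrix with your own result:

```python
import numpy as np

# Illustrative eye-to-hand result: transform from the robot base frame
# (panda_link0) to the camera optical frame. Not a real calibration.
T_base_cam = np.array([
    [ 0.0, -1.0,  0.0,  0.5],
    [-1.0,  0.0,  0.0, -0.3],
    [ 0.0,  0.0, -1.0,  0.8],
    [ 0.0,  0.0,  0.0,  1.0],
])

# A point detected by the camera (e.g. the ArUco marker centre), in metres,
# written in homogeneous coordinates:
p_cam = np.array([0.1, 0.2, 0.6, 1.0])

# The same point expressed in the robot base frame:
p_base = T_base_cam @ p_cam   # approximately [0.3, -0.4, 0.2, 1.0]
```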


Fixes for problems during compilation, and reference articles:

https://blog.csdn.net/ZNC1998/article/details/132475019
https://blog.csdn.net/weixin_45839124/article/details/115789079