Hand-Eye Calibration: Eye-in-Hand and Eye-to-Hand

With the camera mounted on a 3-axis vertical manipulator, hand-eye calibration can be performed fully automatically.

What is a mark? It can be a small circle, a small triangle, a small rectangle, or any figure with a distinct shape.

Calibration approach: pick an arbitrary feature shape on the product plane and create a template from it (the mark), then move the robot through a 3x3 grid pattern (making sure the mark shape appears completely in the camera image at all 9 robot positions) and run template matching at each position.

The product stays still. At the first of the 9 positions, the mark center in the captured image should lie as close as possible to the image center; the other 8 shooting positions must still capture the complete mark.

* Nine-point calibration: pixel rows
PxRow := [23.5, 23.5, 23.5, 71.5, 71.5, 71.5, 118.5, 118.5, 118.5]
* Nine-point calibration: pixel columns
PxColumn := [28.5, 75.5, 122.5, 28.5, 75.5, 122.5, 28.5, 75.5, 122.5]
* Robot coordinates: X
Qx := [100, 50, 0, 100, 50, 0, 100, 50, 0]
* Robot coordinates: Y
Qy := [0, 0, 0, 50, 50, 50, 100, 100, 100]
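The nine point pairs above are exactly what nine-point calibration consumes: a 2D affine transform (rotation, scale, shear, and translation) fitted by least squares from pixel coordinates to robot coordinates. A minimal sketch in Python/NumPy using the data from the listing (the function and variable names here are mine, not HALCON operators):

```python
import numpy as np

# Pixel coordinates of the matched mark at the 9 robot positions (from the listing)
px_row = np.array([23.5, 23.5, 23.5, 71.5, 71.5, 71.5, 118.5, 118.5, 118.5])
px_col = np.array([28.5, 75.5, 122.5, 28.5, 75.5, 122.5, 28.5, 75.5, 122.5])
# Corresponding robot coordinates
qx = np.array([100, 50, 0, 100, 50, 0, 100, 50, 0], dtype=float)
qy = np.array([0, 0, 0, 50, 50, 50, 100, 100, 100], dtype=float)

# Solve [qx, qy] = A @ [row, col, 1] for the 2x3 affine matrix A by least squares
M = np.column_stack([px_row, px_col, np.ones(9)])
sol, residuals, rank, sv = np.linalg.lstsq(M, np.column_stack([qx, qy]), rcond=None)
A = sol.T  # 2x3: robot = A @ [row, col, 1]

def pixel_to_robot(row, col):
    """Map a pixel coordinate to robot coordinates with the fitted affine."""
    return A @ np.array([row, col, 1.0])

print(pixel_to_robot(23.5, 28.5))  # close to [100, 0]
```

Note that the fit is not perfectly exact here, because the row spacing in the data (23.5, 71.5, 118.5) is not perfectly uniform; the mismatch shows up as a small least-squares residual.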

What do we know, and what are we solving for?
We solve for: the robot position that brings the mark to the image center. We know: the shooting position and the pixel coordinates of the matched mark.

Example values: shooting position x = 1300, y = 1000; matched mark at row = 240, col = 478.6.
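With a nine-point result in hand, the question above has a direct answer. The sketch below is illustrative only: `A` is an assumed 2x3 affine mapping pixel offsets to robot offsets (its values are chosen to be consistent with the nine-point grid above), the image center (240, 320) is an assumption, and the sign convention depends on whether the camera is eye-in-hand or stationary:

```python
import numpy as np

# Assumed affine (values consistent with the nine-point data above):
# dx depends on the column offset, dy on the row offset.
A = np.array([[0.0, -50.0 / 47.0, 0.0],
              [100.0 / 95.0, 0.0, 0.0]])

def move_mark_to_center(shoot_x, shoot_y, mark_row, mark_col,
                        center_row=240.0, center_col=320.0):
    """Robot position that brings the mark to the (assumed) image center.

    The pixel offset from the mark to the center is converted into a
    robot offset through the affine map and added to the shooting position.
    """
    drow, dcol = center_row - mark_row, center_col - mark_col
    dx, dy = A @ np.array([drow, dcol, 1.0])
    return shoot_x + dx, shoot_y + dy

# Numbers from the text: shooting position (1300, 1000), mark at (240, 478.6)
x, y = move_mark_to_center(1300, 1000, 240, 478.6)
```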

The core idea of calibration is to unify the robot coordinate system with the camera coordinate system, so that the positional offset can be computed from the actual incoming position of each product.

Coordinate system transformation: translation, plus rotation and scaling.

Expressed in matrix form, translation is an addition (of a vector), while rotation and scaling are matrix multiplications.
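As a quick illustration, the same transform can be written either as "rotate/scale by a matrix product, then add a translation vector", or as a single homogeneous-matrix multiplication that folds the translation into the product. A small sketch with illustrative values:

```python
import numpy as np

theta, s = np.deg2rad(30.0), 2.0          # rotation angle and scale factor
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, -3.0])                 # translation vector

p = np.array([1.0, 0.0])                  # a sample point

# Form 1: rotation/scaling as a matrix product, translation as an addition
q1 = R @ p + t

# Form 2: a single 3x3 homogeneous matrix does it all in one multiplication
H = np.eye(3)
H[:2, :2] = R
H[:2, 2] = t
q2 = (H @ np.array([p[0], p[1], 1.0]))[:2]

assert np.allclose(q1, q2)                # both forms agree
```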

Stationary camera:
Advantages: images can be captured while the robot is moving; the camera cable routing is simple.

Disadvantages: the inspection area is fixed. If external factors change the relative position between the camera and the robot, the camera calibration must be redone.

Moving (robot-mounted) camera:
Advantages: the inspection area moves with the robot, so the overall inspection range is larger.
Longer focal lengths can be used, improving inspection accuracy.
Re-inspection functionality is easy to add.
Disadvantages: the robot must stop while the image is captured.
Care must be taken that the light source is not blocked by the robot or peripheral equipment, and that the camera cable does not suffer from wear.
The vision system returns coordinates in whichever tool coordinate system was used during calibration. If calibration was done with Tool0 but the final application (e.g. grasping or spray painting) uses a different tool, the transformation from Tool0 to the application tool's coordinate system must be applied.

In short: the vision result is expressed in the coordinate system used at calibration time.
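The Tool0-to-application-tool conversion mentioned above amounts to a pose composition. A sketch with made-up 4x4 homogeneous transforms (all names and values here are illustrative, not from any robot controller API):

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

# Vision returns the target pose in Tool0 (assumed values)
target_in_tool0 = hom(np.eye(3), [0.10, 0.00, 0.30])
# The gripper's pose relative to Tool0, from the tool definition (assumed values)
gripper_in_tool0 = hom(np.eye(3), [0.00, 0.00, 0.12])

# Re-express the vision target in the gripper's coordinate system:
# T_gripper<-target = inv(T_tool0<-gripper) @ T_tool0<-target
target_in_gripper = np.linalg.inv(gripper_in_tool0) @ target_in_tool0
print(target_in_gripper[:3, 3])   # translation part: [0.1, 0.0, 0.18]
```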

Calibration procedure:
Stationary camera: the calibration plate is fixed on the gripper; the robot moves the plate to a series of different positions in front of the stationary camera, and the robot pose at each shot is recorded.

Moving camera: the calibration plate is placed at a fixed location; the robot moves the camera to a series of different positions, captures an image at each, and the robot pose at each shot is recorded.

Verification: 1. Inspect the pose values and roughly check them against physical measurements. 2. Verify by actually grasping.

The ImagePointsToWorldPlane operator uses the camera's interior parameters and the measurement plane pose to convert pixel coordinates into coordinates (X, Y) on the measurement plane.

The AffineTransPoint3D operator then uses the hand-eye pose (exterior parameters) to transform points on the measurement plane into the robot coordinate system.
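The two-step chain (pixel → measurement plane → robot base) can be mimicked in plain NumPy. This is a conceptual sketch with made-up intrinsics and poses, not the HALCON implementation:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

# Assumed (illustrative) pinhole intrinsics
fx, fy, cx, cy = 800.0, 800.0, 640.0, 512.0
# Assumed poses: measurement plane 0.5 m in front of the camera; camera 1 m
# above the robot base, looking straight down (illustrative values only)
plane_in_cam = hom(np.eye(3), [0.0, 0.0, 0.5])
cam_in_base = hom(np.diag([1.0, -1.0, -1.0]), [0.2, 0.1, 1.0])

def pixel_to_base(row, col):
    """Pixel -> measurement plane (cf. ImagePointsToWorldPlane),
    then plane -> robot base (cf. AffineTransPoint3D)."""
    d = np.array([(col - cx) / fx, (row - cy) / fy, 1.0])  # viewing ray (camera frame)
    R, t = plane_in_cam[:3, :3], plane_in_cam[:3, 3]
    o_p, d_p = -R.T @ t, R.T @ d          # express the ray in plane coordinates
    lam = -o_p[2] / d_p[2]                # intersect the plane z = 0
    p_plane = o_p + lam * d_p             # (X, Y, 0) on the measurement plane
    p_base = cam_in_base @ plane_in_cam @ np.append(p_plane, 1.0)
    return p_base[:3]

# The principal point's ray hits the plane origin, 0.5 m below the camera
print(pixel_to_base(512, 640))
```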

This example shows how to perform a pick-and-place application with a SCARA robot based on the calibration information determined by a SCARA hand-eye calibration. In a first step, a shape model is defined from a model image. Then, based on this shape model, the object is searched in each image. For one selected object, the robot coordinates are calculated that can be used to grasp this object.

To adapt this example for real applications, the images must be acquired from a camera (instead of read from file) and the control of the robot must be implemented (instead of the respective lines in this example that are commented out).

Typically, the images must be rectified before the matching. This step may only be omitted if the camera looks exactly orthogonal onto the measurement plane.

To run the example program with the provided example images, RectifyImages must be set to true. Otherwise, some objects will not be found by the matching because of the perspective distortions.

* This example shows how to perform a pick-and-place application with
* a SCARA robot based on the calibration information determined by
* a SCARA hand-eye calibration. In a first step, a shape model is defined
* from a model image. Then, based on this shape model, the object is searched
* in each image. For one selected object, the robot coordinates are
* calculated that can be used to grasp this object.
* To adapt this example for real applications, the images must be acquired
* from a camera (instead of read from file) and the control of the robot
* must be implemented (instead of the respective lines in this example that
* are commented out).
* Typically, the images must be rectified before the matching. This step
* may only be omitted if the camera looks exactly orthogonal onto the
* measurement plane. To run the example program with the provided example
* images, RectifyImages must be set to true. Otherwise, some objects will
* not be found by the matching because of the perspective distortions.
RectifyImages := true
* Read in the calibration information provided by one of the HDevelop
* example programs calibrate_hand_eye_scara_stationary_cam.hdev
* or calibrate_hand_eye_scara_stationary_cam_approx.hdev.
try
    * Read the result of the hand eye calibration
    read_pose ('cam_in_base_pose.dat', CamInBasePose)
    * Read the data required for the pose estimation of the objects to be grasped
    read_cam_par ('camera_parameters.dat', CameraParam)
    read_pose ('measurement_plane_in_cam_pose.dat', MPInCamPose)
catch (Exception)
    * The calibration information is not yet available, use standard calibration
    * information instead. To provide the calibration information on file, run
    * one of the HDevelop example programs calibrate_hand_eye_scara_stationary_cam.hdev
    * or calibrate_hand_eye_scara_stationary_cam_approx.hdev.
    create_pose (0.0559, 0.195, 0.4803, 180.0982, 29.8559, 179.9439, 'Rp+T', 'gba', 'point', CamInBasePose)
    gen_cam_par_area_scan_division (0.0165251, -642.277, 4.65521e-006, 4.65e-006, 595.817, 521.75, 1280, 1024, CameraParam)
    create_pose (0.0046, -0.0029, 0.4089, 359.7866, 29.732, 0.2295, 'Rp+T', 'gba', 'point', MPInCamPose)
endtry
* Prepare the rectification map to eliminate the perspective
* distortions of the images
if (RectifyImages)
    prepare_rectification_map (Map, CameraParam, MPInCamPose, MappingScale, MPInCamPoseMapping)
endif

dev_update_off ()
set_system ('border_shape_models', 'true')
*

* Here, the connection to the robot should be established and
* the robot should be moved to a defined standby pose that
* allows to take an unoccluded image of the measurement plane.
* Define a shape model of the object to be grasped
* Acquire an image for model generation
read_image (Image, '3d_machine_vision/hand_eye/scara_stationary_cam_setup_01_metal_parts_01')
if (RectifyImages)
    map_image (Image, Map, ModelImage)
else
    copy_image (Image, ModelImage)
endif
dev_close_window ()
dev_open_window_fit_image (ModelImage, 0, 0, 600, 600, WindowHandle)
set_display_font (WindowHandle, 16, 'mono', 'true', 'false')
dev_clear_window ()
dev_display (ModelImage)
dev_set_line_width (2)

* Create the shape model
gen_rectangle1 (ROI, 400, 300, 1100, 1300)
gauss_filter (ModelImage, ImageGauss, 5)
reduce_domain (ImageGauss, ROI, ImageReduced)
get_domain (ImageReduced, ModelROI)
create_shape_model (ImageReduced, 'auto', rad(0), rad(360), 'auto', 'auto', 'use_polarity', [10,50], 'auto', ModelID)
area_center (ModelROI, ModelROIArea, ModelROIRow, ModelROIColumn)
dev_display_shape_matching_results (ModelID, 'green', ModelROIRow, ModelROIColumn, 0, 1, 1, 0)
* Define the grasping point on the object either by indicating
* it in the image (only if the object can be picked up by the tool in any orientation)
* or by grasping it with the robot and registering the respective robot pose
DefineGraspingPointByRobot := true
if (DefineGraspingPointByRobot)
    dev_set_colored (12)
    create_pose (0.2592, 0.1997, 0.1224, 0, 0, 1.2572, 'Rp+T', 'gba', 'point', GraspingPointModelInBasePose)
    pose_invert (CamInBasePose, BaseInCamPose)
    pose_to_hom_mat3d (BaseInCamPose, BaseInCamHomMat3D)
    affine_trans_point_3d (BaseInCamHomMat3D, GraspingPointModelInBasePose[0], GraspingPointModelInBasePose[1], GraspingPointModelInBasePose[2], Qx, Qy, Qz)
    project_3d_point (Qx, Qy, Qz, CameraParam, GraspingPointModelRow, GraspingPointModelColumn)
    GraspingPointModelAngle := GraspingPointModelInBasePose[5]
    if (RectifyImages)
        * Calculate rectified image coordinates
        image_points_to_world_plane (CameraParam, MPInCamPoseMapping, GraspingPointModelRow, GraspingPointModelColumn, MappingScale, GraspingPointModelColumn, GraspingPointModelRow)
        * Display grasping pose in rectified model image
        get_image_size (ModelImage, WidthM, HeightM)
        gen_cam_par_area_scan_telecentric_division (1.0, 0, MappingScale, MappingScale, 0, 0, WidthM, HeightM, CamParamRect)
        GraspingPointModelXMP := GraspingPointModelColumn * MappingScale
        GraspingPointModelYMP := GraspingPointModelRow * MappingScale
        create_pose (GraspingPointModelXMP, GraspingPointModelYMP, 0, 0, 0, GraspingPointModelAngle, 'Rp+T', 'gba', 'point', PoseCoordSystemVis)
        disp_3d_coord_system (WindowHandle, CamParamRect, PoseCoordSystemVis, 0.02)
    else
        * Display grasping pose in original model image
        pose_invert (CamInBasePose, BaseInCamPose)
        pose_compose (BaseInCamPose, GraspingPointModelInBasePose, PoseCoordSystemVis)
        disp_3d_coord_system (WindowHandle, CameraParam, PoseCoordSystemVis, 0.02)
    endif
    disp_message (WindowHandle, 'Model contours and grasping pose', 'window', 12, 12, 'black', 'true')
else
    binary_threshold (ImageReduced, Region, 'max_separability', 'light', UsedThreshold)
    fill_up (Region, RegionFillUp)
    erosion_rectangle1 (RegionFillUp, RegionErosion, 160, 1)
    smallest_rectangle2 (RegionErosion, GraspingPointModelRow, GraspingPointModelColumn, Phi, Length1, Length2)
    gen_cross_contour_xld (GraspingPointModel, GraspingPointModelRow, GraspingPointModelColumn, 25, 0.785398)
    dev_set_color ('yellow')
    dev_display (GraspingPointModel)
    disp_message (WindowHandle, 'Model contours and grasping point', 'window', 12, 12, 'black', 'true')
    GraspingPointModelAngle := 0
endif
set_shape_model_origin (ModelID, GraspingPointModelRow - ModelROIRow, GraspingPointModelColumn - ModelROIColumn)
disp_continue_message (WindowHandle, 'black', 'true')
stop ()
* Loop over the images of objects to be grasped by the robot
if (RectifyImages)
    pose_to_hom_mat3d (MPInCamPoseMapping, MPInCamHomMat3DMapping)
endif
for ImageIdx := 2 to 6 by 1
    * Acquire next image
    read_image (Image, '3d_machine_vision/hand_eye/scara_stationary_cam_setup_01_metal_parts_' + ImageIdx$'02d')
    * Rectify the image to allow the use of standard shape based matching
    * for the search for instances of the object
    if (RectifyImages)
        map_image (Image, Map, SearchImage)
    else
        copy_image (Image, SearchImage)
    endif
    dev_clear_window ()
    dev_display (SearchImage)
    * Find instances of the object
    find_shape_model (SearchImage, ModelID, rad(0), rad(360), 0.5, 0, 0.5, 'least_squares', [0,3], 0.9, Row, Column, Angle, Score)
    if (|Row| < 1)
        disp_message (WindowHandle, 'No objects found', 'window', 12, 12, 'black', 'true')
        continue
    endif
    * Select one specific instance (here: the leftmost)
    LeftmostIdx := sort_index(Column)[0]
    GraspingPointRow := Row[LeftmostIdx]
    GraspingPointColumn := Column[LeftmostIdx]
    GraspingPointAngle := Angle[LeftmostIdx]
    * Display matching results and indicate object to be grasped
    dev_display_shape_matching_results (ModelID, 'blue', Row, Column, Angle, 1, 1, 0)
    dev_display_shape_matching_results (ModelID, 'green', GraspingPointRow, GraspingPointColumn, GraspingPointAngle, 1, 1, 0)
    disp_message (WindowHandle, |Row| + ' objects found (Green: Object to be grasped)', 'window', 12, 12, 'black', 'true')
    disp_continue_message (WindowHandle, 'black', 'true')
    stop ()
    * Calculate point to approach
    if (RectifyImages)
        calculate_point_to_approach_scara_stationary (GraspingPointRow, GraspingPointColumn, GraspingPointAngle + rad(GraspingPointModelAngle), RectifyImages, MappingScale, MPInCamHomMat3DMapping, [], [], CamInBasePose, ObjInBasePose)
    else
        calculate_point_to_approach_scara_stationary (GraspingPointRow, GraspingPointColumn, GraspingPointAngle + rad(GraspingPointModelAngle), RectifyImages, [], [], CameraParam, MPInCamPose, CamInBasePose, ObjInBasePose)
    endif
    * Display object to be grasped together with the grasping point
    dev_clear_window ()
    dev_display (SearchImage)
    dev_display_shape_matching_results (ModelID, 'green', GraspingPointRow, GraspingPointColumn, GraspingPointAngle, 1, 1, 0)
    dev_set_colored (12)
    if (RectifyImages)
        get_image_size (SearchImage, Width, Height)
        gen_cam_par_area_scan_telecentric_division (1.0, 0, MappingScale, MappingScale, 0, 0, Width, Height, CamParamRect)
        GraspingPointXMP := GraspingPointColumn * MappingScale
        GraspingPointYMP := GraspingPointRow * MappingScale
        create_pose (GraspingPointXMP, GraspingPointYMP, 0, 0, 0, -deg(GraspingPointAngle) + GraspingPointModelAngle, 'Rp+T', 'gba', 'point', PoseCoordSystemVis)
        disp_3d_coord_system (WindowHandle, CamParamRect, PoseCoordSystemVis, 0.02)
    else
        pose_invert (CamInBasePose, BaseInCamPose)
        pose_compose (BaseInCamPose, ObjInBasePose, PoseCoordSystemVis)
        disp_3d_coord_system (WindowHandle, CameraParam, PoseCoordSystemVis, 0.02)
    endif
    *
    disp_message (WindowHandle, 'Press F5 to pick and place indicated object', 'window', 12, 12, 'black', 'true')
    disp_message (WindowHandle, ['ObjInBasePose:','Tx: ','Ty: ','Tz: ','Alpha: ','Beta: ','Gamma: '] + ['',ObjInBasePose[0:5]$'.3f' + [' m',' m',' m',' deg',' deg',' deg']], 'window', 305, 12, 'black', 'true')
    disp_continue_message (WindowHandle, 'black', 'true')
    stop ()
    * Convert translation part of the pose into mm, if necessary
    ToolInBasePoseMM := [ObjInBasePose[0:2] * 1000,ObjInBasePose[3:6]]
    * Pick and place the object
    * Here, the robot should be moved to the above determined pose of the
    * object (ToolInBasePoseMM), where the object should be picked and
    * then placed at some predefined position (something like
    * PlacePositionInBasePoseMM). Finally, the robot should be moved again
    * to the standby pose that allows to take an unoccluded image of the
    * measurement plane.
endfor
disp_end_of_program_message (WindowHandle, 'black', 'true')
* Here, the connection to the robot should be closed.
* Reset system parameters
set_system ('border_shape_models', 'false')