D455: How to stream the video/depth streams and the IMU (inertial unit) streams at the same time? (two-pipeline approach vs. callback approach)

【D455】【python】How to get color_stream\depth_stream\accel_stream\gyro_stream at the same time?

MartyG-RealSense commented:
Hi @Dontla When streaming depth, color and IMU at the same time, there is sometimes a problem where the RGB stream becomes No Frames Received or more rarely, the IMU frames stop being received. To deal with this issue, the recommended solutions are to either use two separate pipelines (IMU alone on one pipeline and depth / color on the other pipeline), or to use callbacks.

For the separate pipeline approach, I recommend the Python script in the link below…

#5628 (comment)

For the callback approach in Python, the link below is a good reference.

#5417

Two-pipeline approach

# -*- coding: utf-8 -*-
"""
@File    : D455_double_pipeline.py
@Time    : 2020/12/19 16:07
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##      configurable realsense viewer           ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

#
# NOTE: it appears that imu, rgb and depth cannot all be running simultaneously.
#       Any two of those 3 are fine, but not all three: causes timeout on wait_for_frames()
#
device_id = None  # "923322071108" # serial number of device to use or None to use default
enable_imu = True
enable_rgb = True
enable_depth = True
# TODO: enable_pose
# TODO: enable_ir_stereo


# Configure streams
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)  # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope
    imu_profile = imu_pipeline.start(imu_config)

if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()

    # if we are provided with a specific device, then enable it
    if device_id is not None:
        config.enable_device(device_id)

    if enable_depth:
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)  # depth

    if enable_rgb:
        config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 60)  # rgb

    # Start streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)
        if enable_rgb:
            # Create an align object (only needed when both depth and color are enabled)
            # rs.align allows us to perform alignment of depth frames to other frames
            # The "align_to" is the stream type to which we plan to align depth frames.
            align_to = rs.stream.color
            align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    frame_time = start_time
    while True:
        last_time = frame_time
        frame_time = time.time() - start_time
        frame_count += 1

        #
        # get the frames
        #
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)  # wait 10 seconds for first frame

        if enable_imu:
            imu_frames = imu_pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)

        if enable_rgb or enable_depth:
            # Align the depth frame to color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03),
                                               cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)

        if enable_imu:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            print("imu frame {} in {} seconds: \n\taccel = {}, \n\tgyro = {}".format(str(frame_count),
                                                                                     str(frame_time - last_time), str(
                    accel_frame.as_motion_frame().get_motion_data()), str(
                    gyro_frame.as_motion_frame().get_motion_data())))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    # Stop streaming on whichever pipelines were started
    if enable_rgb or enable_depth:
        pipeline.stop()
    if enable_imu:
        imu_pipeline.stop()

This works nicely and runs correctly.
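One practical extension (my own sketch, not part of the original script): the main loop above reads the IMU pipeline only once per video frame, so at a 200 Hz gyro rate most IMU framesets are never consumed. If every sample matters, the IMU pipeline could be polled from a background thread, roughly like this. The names imu_worker, imu_samples and imu_lock are made up for the sketch.

# Sketch only: poll the IMU pipeline on its own thread so the image loop does
# not throttle IMU sampling. Assumes the same imu_pipeline / imu_config as in
# the script above.
import threading

import pyrealsense2 as rs

imu_samples = []             # accumulated (accel, gyro) readings
imu_lock = threading.Lock()
imu_running = True


def imu_worker(imu_pipeline):
    while imu_running:
        try:
            frames = imu_pipeline.wait_for_frames(200)
        except RuntimeError:
            continue  # timed out waiting for an IMU frameset
        accel = frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
        gyro = frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
        if accel and gyro:
            with imu_lock:
                imu_samples.append((accel.as_motion_frame().get_motion_data(),
                                    gyro.as_motion_frame().get_motion_data()))


# After imu_pipeline.start(imu_config):
# threading.Thread(target=imu_worker, args=(imu_pipeline,), daemon=True).start()

With that setup the main loop only handles depth / color and cv2.imshow, while the IMU data accumulates independently under the lock.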

Callback approach

This code does not run successfully; I haven't figured out how to fix it yet.

# -*- coding: utf-8 -*-
"""
@File    : D455_callback.py
@Time    : 2020/12/25 11:30
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""
import pyrealsense2 as rs

pipeline = rs.pipeline()


def test_callback(fs):
    global frameset
    frameset = fs
    pipeline.stop()


profile = pipeline.start(test_callback)
# print(profile)
# <pyrealsense2.pyrealsense2.pipeline_profile object at 0x000001FF99DE3490>

streams = profile.get_streams()
# print(streams)
# [<pyrealsense2.video_stream_profile: 1(0) 848x480 @ 30fps 1>, <pyrealsense2.video_stream_profile: 2(0) 1280x720 @ 30fps 5>, <pyrealsense2.stream_profile: 5(0) @ 200fps 16>, <pyrealsense2.stream_profile: 6(0) @ 63fps 16>]

frameset

D:\20191031_tensorflow_yolov3\python\python.exe C:/Users/Administrator/Desktop/test_D455/D455_callback.py
Traceback (most recent call last):
  File "C:/Users/Administrator/Desktop/test_D455/D455_callback.py", line 28, in <module>
    frameset
NameError: name 'frameset' is not defined

Process finished with exit code 1
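The NameError most likely means the callback had not fired (and so had never assigned the global frameset) by the time the main thread reached that line, and stopping the pipeline from inside the callback is also risky. Below is a minimal sketch of how the callback approach could look instead, based on the pipeline.start(config, callback) overload of pyrealsense2. It is untested here; the stream settings, the latest dict and frame_lock are illustrative assumptions, not the method from the referenced issue.

# -*- coding: utf-8 -*-
# Minimal sketch of the callback approach (untested). Depth and color arrive
# as framesets, accel/gyro as individual motion frames.
import threading
import time

import pyrealsense2 as rs

frame_lock = threading.Lock()   # the callback runs on a librealsense thread
latest = {'depth': None, 'color': None, 'accel': None, 'gyro': None}


def frame_callback(frame):
    """Store the most recent data from every frameset / motion frame."""
    with frame_lock:
        if frame.is_frameset():
            fs = frame.as_frameset()
            latest['depth'] = fs.get_depth_frame()
            latest['color'] = fs.get_color_frame()
        elif frame.is_motion_frame():
            motion = frame.as_motion_frame()
            if motion.get_profile().stream_type() == rs.stream.accel:
                latest['accel'] = motion.get_motion_data()
            elif motion.get_profile().stream_type() == rs.stream.gyro:
                latest['gyro'] = motion.get_motion_data()


config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)

pipeline = rs.pipeline()
profile = pipeline.start(config, frame_callback)

try:
    while True:
        time.sleep(0.1)
        with frame_lock:
            print('accel =', latest['accel'], ' gyro =', latest['gyro'])
finally:
    pipeline.stop()

The key difference from the failed attempt above is that the main thread never waits for frames itself: it only reads whatever the callback last stored, and pipeline.stop() is called from the main thread rather than from inside the callback.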
