Notes on PyRealsense Development (VScode + pyrealsense2 + PyQt5 + OpenCV + Open3D)


Preface

These notes record the code patterns I use most often when developing with pyrealsense2: basic pipeline and frame handling, building an interactive UI with PyQt5 and OpenCV to display color and depth images, point-cloud display with PyQt5 and Open3D, and developing C++ libraries and applications.
Unless a system environment is explicitly stated, everything below assumes Windows.
This article will be updated regularly until the author graduates. It is aimed at beginners as a starting point; corrections and suggestions are welcome.


I. Development Environment

Most of the code was written in VScode. Since I am not an expert VScode user, a small portion of the code follows other people's setups on Ubuntu 18.04, compiled with CMake.

On Windows 10:
VScode (C++) setup: https://blog.csdn.net/qq_43041976/article/details/100542557
Configuring Anaconda (Python) in VScode: https://blog.csdn.net/zzly1126/article/details/84927785 — note that PyQt5 requires a Python version newer than 3.5
Configuring PyQt5 in VScode: https://blog.csdn.net/weixin_40014984/article/details/104531359

On Linux:
Requirements and environments vary widely, so please search around and pick the combination that suits your own setup.
For reference:
Installing Anaconda on Linux: https://blog.csdn.net/qq_53564294/article/details/120535377


II. VScode Tips

1. Debugging

To pass arguments (args) when debugging Python code in VScode, add them in launch.json (the same applies to C++): https://blog.csdn.net/zk0272/article/details/83105574
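As a minimal sketch (the program entry and the argument names/values here are hypothetical), the `args` entry belongs inside a configuration object in `.vscode/launch.json`:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "args": ["--width", "640", "--height", "480"]
        }
    ]
}
```

With this in place, pressing F5 launches the current file as if it had been run with those command-line arguments.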

2. Connecting VScode to Gitee and GitHub with Git

Reference: https://blog.csdn.net/qq_38981614/article/details/115013188

For a Git tutorial, see Liao Xuefeng's site: https://www.liaoxuefeng.com/wiki/896043488029600

3. Common VScode Shortcuts

Common shortcuts shared by Windows and Ubuntu:
Ctrl+K Ctrl+C  comment the selection
Ctrl+K Ctrl+U  uncomment the selection
Ctrl+K Ctrl+F  format the selection (requires IntelliCode)
F1             open the command palette
Ctrl+F         find

For more, see https://www.cnblogs.com/schut/p/10461840.html

4. Adding VScode to the Right-Click Menu

1) Right-click on the desktop to create a new text file (.txt), open it, and paste in the code below:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\VSCode]
@="Open with Code"
"Icon"="C:\\Program Files\\Microsoft VS Code\\Code.exe"

[HKEY_CLASSES_ROOT\*\shell\VSCode\command]
@="\"C:\\Program Files\\Microsoft VS Code\\Code.exe\" \"%1\""


[HKEY_CLASSES_ROOT\Directory\shell\VSCode]
@="Open with Code"
"Icon"="C:\\Program Files\\Microsoft VS Code\\Code.exe"

[HKEY_CLASSES_ROOT\Directory\shell\VSCode\command]
@="\"C:\\Program Files\\Microsoft VS Code\\Code.exe\" \"%V\""


[HKEY_CLASSES_ROOT\Directory\Background\shell\VSCode]
@="Open with Code"
"Icon"="C:\\Program Files\\Microsoft VS Code\\Code.exe"

[HKEY_CLASSES_ROOT\Directory\Background\shell\VSCode\command]
@="\"C:\\Program Files\\Microsoft VS Code\\Code.exe\" \"%V\""

2) Replace every VScode path in the file with your own VScode installation directory (note the doubled backslashes in the paths), save, then change the file extension to .reg. The file name can be anything, e.g. 0.reg.
3) Double-click the file to run it and confirm every prompt.

Reference: https://yiliang.blog.csdn.net/article/details/82194913

III. PyRealsense and OpenCV

PyRealsense official repository: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python
It covers installation and includes several examples, which makes it easy to get started.

OpenCV-Python and NumPy are required, so install them first:

pip install opencv-python==3.4.2.17
pip install numpy

For a detailed OpenCV installation walkthrough and some common operations, see: https://blog.csdn.net/qq_45066628/article/details/119342697

1. Displaying Color and Depth Images (colormap)

## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

###############################################
##      Open CV and Numpy integration        ##
###############################################

import pyrealsense2 as rs
import numpy as np
import cv2

# Configure depth and color streams
pipeline = rs.pipeline()  # the pipeline simplifies interaction with the device and with computer-vision processing modules
config = rs.config()  # the config lets the user request streams and set device-selection and configuration filters

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))  # read the device product line

# Check whether an RGB camera is present
found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # enable the depth stream

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # enable the color stream; 640*480 matches the depth stream, and note the bgr8 format

# Start streaming
pipeline.start(config)
try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame() # type: pyrealsense2.depth_frame
        color_frame = frames.get_color_frame() # type: pyrealsense2.video_frame (get_infrared_frame also returns a pyrealsense2.video_frame)
        if not depth_frame or not color_frame:
            continue
            
        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(
            depth_image, alpha=0.03), cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape

        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image, dsize=(
                depth_colormap_dim[1], depth_colormap_dim[0]), interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            # images = np.vstack((color_image, depth_colormap))  # stack color and depth along the first axis (height)
            images = np.hstack((color_image, depth_colormap))  # stack color and depth along the second axis (width)
            # images = np.dstack((color_image, depth_colormap))  # stacking along the third axis (channels) does not apply here
            
        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)

finally:
    # Stop streaming
    pipeline.stop()
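A note on the alpha=0.03 passed to cv2.convertScaleAbs above: it scales the raw 16-bit depth values into the 8-bit range expected by the colormap, saturating at 255/0.03 ≈ 8500 depth units (about 8.5 m at the typical 1 mm depth scale). A numpy sketch of the same conversion (the sample values are illustrative):

```python
import numpy as np

# Emulate cv2.convertScaleAbs(depth, alpha=0.03): scale, take the absolute
# value, round, and saturate into uint8.
depth = np.array([[0, 1000, 10000]], dtype=np.uint16)   # raw z16 depth values
scaled = np.clip(np.round(np.abs(depth.astype(np.float64) * 0.03)),
                 0, 255).astype(np.uint8)
print(scaled.tolist())  # [[0, 30, 255]] — 1000 maps to 30, 10000 saturates at 255
```

Choosing a larger alpha makes nearby depth variations more visible in the colormap but saturates sooner.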

A note on pipeline.start(config):
It starts the pipeline streaming according to the configuration. The pipeline captures samples from the device in a loop and delivers them to the attached computer-vision modules and processing blocks, according to each module's requirements and threading model. While the loop runs, the application can read the camera streams by calling wait_for_frames() or poll_for_frames(); the loop runs until the pipeline is stopped. The pipeline can only be started when it is not already started; starting an already-started pipeline raises an exception. On start, the pipeline selects and activates a device according to the configuration, or a default configuration if none is supplied. When an rs2::config is provided, the pipeline tries to activate the result of config.resolve(). The call fails if the application's requests conflict with the pipeline's computer-vision modules, or if no matching device is present on the platform. Between the config.resolve() call and pipeline start, the available configurations and devices may change, for example if a device is plugged in or unplugged, or another application acquires ownership of a device.
The call returns a pyrealsense2.pipeline_profile.

There are two ways to fetch frames; both return a pyrealsense2.composite_frame:
1) poll_for_frames()
Checks whether a new set of frames is available and retrieves the latest undelivered frameset. The frameset contains time-synchronized frames from every enabled stream in the pipeline. The method returns without blocking the calling thread and reports whether new frames are available; if they are, the latest frameset is fetched. Device frames generated while the method is not being called are dropped. To avoid frame drops, call this method as fast as the device frame rate. The application may keep frame handles for deferred processing, but if it holds too much history, the device may run out of memory resources to produce new frames, and subsequent calls will not return new frames until resources become available.
2) wait_for_frames()
Waits until a new set of frames is available. The frameset contains time-synchronized frames from every enabled stream in the pipeline. When streams run at different frame rates, the frameset contains a matching frame from the slower stream, which may already have appeared in a previous frameset. The method blocks the calling thread and fetches the latest unread frameset. Device frames generated while the method is not being called are dropped; to avoid frame drops, call this method as fast as the device frame rate. As above, holding on to too much frame history can exhaust the device's memory resources, after which this method cannot fetch new frames until resources become available.

A condensed version of the acquisition code:

pipeline = rs.pipeline()  # the pipeline simplifies interaction with the device and with computer-vision processing modules
config = rs.config()  # the config lets the user request streams and set device-selection and configuration filters
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # enable the depth stream
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # enable the color stream; 640*480 matches the depth stream, and note the bgr8 format
pipeline.start(config) # Start streaming
try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame() # type: pyrealsense2.depth_frame
        color_frame = frames.get_color_frame() # type: pyrealsense2.video_frame
        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((color_image, depth_colormap))  # stack color and depth along the second axis (width)
        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)
finally:
    pipeline.stop() # Stop streaming

Note that running this code from VScode on Ubuntu may fail with qt.qpa.plugin: Could not find the Qt platform plugin "xcb". This is a compatibility issue between VScode and imshow.
To see the underlying cause, run the following in the terminal:

export QT_DEBUG_PLUGINS=1

Then run the code again and look through the error output for the specific cause, such as missing library files or a remote session without a correctly configured display.
You can add the following to the code:

import os
import platform

if platform.system()=="Linux":
    envpath = '/home/lirq/anaconda3/envs/Pyrealsense-py3-8/lib/python3.8/site-packages/cv2/qt/plugins/platforms'
    os.environ['QT_QPA_PLATFORM_PLUGIN_PATH'] = envpath

Replace envpath with the platforms folder of Qt inside your own Anaconda environment.
The problem the author hit was "could not connect to display:0", which was solved by reinstalling OpenCV:

pip uninstall opencv-python
pip install opencv-contrib-python
sudo apt-get install python3-opencv

After rebooting Ubuntu, imshow works normally from VSCode.

2. Multi-Camera Management with device_manager

import pyrealsense2 as rs   
import cv2 
import numpy as np
from Calibration.realsense_device_manager import DeviceManager # multi-camera management
# Define some constants 
resolution_width = 1280 # pixels
resolution_height = 720 # pixels
frame_rate = 30  # fps
dispose_frames_for_stablisation = 30  # frames to let the auto-exposure controller stabilise
cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE) # window setup
try:
    # Enable the streams from all the intel realsense devices
    rs_config = rs.config()
    rs_config.enable_stream(rs.stream.depth, resolution_width, resolution_height, rs.format.z16, frame_rate)
    rs_config.enable_stream(rs.stream.infrared, 1, resolution_width, resolution_height, rs.format.y8, frame_rate)
    rs_config.enable_stream(rs.stream.color, resolution_width, resolution_height, rs.format.bgr8, frame_rate)

    # Use the device manager class to enable the devices and get the frames
    device_manager = DeviceManager(rs.context(), rs_config)
    device_manager.enable_all_devices()
    
    # Allow some frames for the auto-exposure controller to stabilise
    for frame in range(dispose_frames_for_stablisation):
        frames = device_manager.poll_frames()

    assert( len(device_manager._available_devices) > 0 )

    # Continue acquisition until terminated with Ctrl+C by the user
    while 1:
        '''
        #################original version##################
        # Get the frames from all the devices
        frames_devices = device_manager.poll_frames()

        # TODO: operate on each camera's streams
        for (device_info, frame) in frames_devices.items():
            #dev_info = (serial, device.product_line)
            device = device_info[0] #serial number 
            color_image = np.asarray(frame[rs.stream.color].get_data())  # frame[rs.stream.color] is a pyrealsense2.video_frame
            # Visualise the results
            cv2.imshow('Color image from RealSense Device Nr: ' + device, color_image)
            cv2.waitKey(1)
        #################original version##################
        '''
        # operate on each camera's streams
        for (serial, device) in device_manager._enabled_devices.items():  # device_manager._enabled_devices[device_serial] = Device(pipeline, pipeline_profile, product_line)
        # device.pipeline is equivalent to pipeline = rs.pipeline()
        # serial: camera serial number
        # device: Device(pipeline, pipeline_profile, product_line)  # see realsense_device_manager
            # TODO: operate on each camera's streams
            streams = device.pipeline_profile.get_streams()
            frames = device.pipeline.wait_for_frames() #frameset will be a pyrealsense2.composite_frame object
            color_frame = frames.get_color_frame() 
            color_image = np.asanyarray(color_frame.get_data())
            # Visualise the results
            cv2.imshow('Color image from RealSense Device Nr: ' + serial, color_image)
            cv2.waitKey(1)
        
except KeyboardInterrupt:
    print("The program was interrupted by the user. Closing the program...")

finally:
    device_manager.disable_streams()
    cv2.destroyAllWindows()

Compared with the original version, the new API is more consistent with Section 1. Note that device_manager has shown incompatibilities with Python threads on Ubuntu.
The core multi-camera code:

from Calibration.realsense_device_manager import DeviceManager # multi-camera management
try:
    # Enable the streams from all the intel realsense devices
    rs_config = rs.config()
    rs_config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    rs_config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

    # Use the device manager class to enable the devices and get the frames
    device_manager = DeviceManager(rs.context(), rs_config)
    device_manager.enable_all_devices()
    assert( len(device_manager._available_devices) > 0 )

    # Continue acquisition until terminated with Ctrl+C by the user
    while 1:
        # operate on each camera's streams
        for (serial, device) in device_manager._enabled_devices.items():
            # TODO: operate on each camera's streams separately
            streams = device.pipeline_profile.get_streams()
            frames = device.pipeline.wait_for_frames() #frameset will be a pyrealsense2.composite_frame object
            color_frame = frames.get_color_frame() 
            color_image = np.asanyarray(color_frame.get_data())
            # Visualise the results
            cv2.imshow('Color image from RealSense Device Nr: ' + serial, color_image)
            cv2.waitKey(1)
except KeyboardInterrupt:
    print("The program was interrupted by the user. Closing the program...")
finally:
    device_manager.disable_streams()
    cv2.destroyAllWindows()

3. Capturing Images for OpenCV Stereo Calibration

To compare against MATLAB's calibration algorithm, images from the left and right cameras need to be saved to a specified folder. The full implementation is in Section V.3 (thread management with PyQt5); only the image-saving code is listed here.

frames1 = pipeline1.wait_for_frames()
aligned_frames1 = align.process(frames1)
color_frame1 = aligned_frames1.get_color_frame()
color_image1 = np.asanyarray(color_frame1.get_data())  # get the image as a numpy array for OpenCV output
MyPath = self.output_path+'/output/'
cv2.imwrite(MyPath+'cam1_color_' +str(i)+'.png', color_image1)
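The path handling above can be sketched as a small helper; the function name and the temporary directory used for the demo are my own, not part of the original code:

```python
import os
import tempfile

def frame_path(output_path: str, cam: int, idx: int) -> str:
    """Build <output_path>/output/cam<cam>_color_<idx>.png, creating the folder if needed."""
    out_dir = os.path.join(output_path, 'output')
    os.makedirs(out_dir, exist_ok=True)
    return os.path.join(out_dir, 'cam%d_color_%d.png' % (cam, idx))

base = tempfile.mkdtemp()
print(frame_path(base, 1, 0))  # .../output/cam1_color_0.png
```

A usage matching the snippet above would then be cv2.imwrite(frame_path(self.output_path, 1, i), color_image1).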

4. OpenCV Stereo Vision Detection

IV. PyRealsense and Open3D

PCL is cumbersome to install, while both Open3D for Python and Open3D (C++) install easily, so Open3D is used here.
Open3D official documentation: http://www.open3d.org/docs/release/introduction.html

1. Displaying a 3D Point Cloud

import open3d as o3d
import numpy as np

pcd = o3d.io.read_point_cloud("/home/xxxx/0.ply")
pcd.transform([[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,1]])  # transform into the camera view
# display the point cloud
o3d.visualization.draw_geometries([pcd])
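The 4x4 matrix passed to pcd.transform is a homogeneous transform that negates the Y and Z axes (RealSense camera coordinates point +Y down and +Z forward), so the cloud appears upright in the viewer. Applying it to a single point by hand:

```python
import numpy as np

T = np.array([[1, 0, 0, 0],
              [0, -1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, 1]], dtype=float)

p = np.array([0.1, 0.2, 0.5, 1.0])   # a point 0.5 m in front of the camera, homogeneous coordinates
q = T @ p
print(q[:3])  # Y and Z are negated: [ 0.1 -0.2 -0.5]
```

Open3D applies exactly this matrix to every point of the cloud when transform() is called.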

2. Getting the Distance and 3D Coordinates of a Pixel

# Get a pixel's distance and 3D coordinates, output in meters
def get_3d_camera_coordinate(depth_pixel, aligned_depth_frame, depth_intrin):
    x = depth_pixel[0]
    y = depth_pixel[1]
    dis = aligned_depth_frame.get_distance(x, y)        # depth at this pixel
    # print ('depth: ',dis)       # depth is in meters
    camera_coordinate = rs.rs2_deproject_pixel_to_point(depth_intrin, depth_pixel, dis)
    # print ('camera_coordinate: ',camera_coordinate)
    return dis, camera_coordinate
# Get the 3D coordinates of an arbitrary pixel (point-cloud method)
def get_3d_camera_coordinate_pc(depth_pixel, color_frame, depth_frame):
    x = depth_pixel[0]
    y = depth_pixel[1]
    pc = rs.pointcloud()        # declare a point-cloud object
    points = rs.points()

    ###### compute the point cloud #####
    pc.map_to(color_frame)
    points = pc.calculate(depth_frame)
    vtx = np.asanyarray(points.get_vertices())
    #  print ('vtx_before_reshape: ', vtx.shape)        # 307200
    vtx = np.reshape(vtx,(480, 640, -1))   
    # print ('vtx_after_reshape: ', vtx.shape)       # (480, 640, 1)

    camera_coordinate = vtx[y][x][0]
    # print ('camera_coordinate: ',camera_coordinate)
    dis = camera_coordinate[2]
    return dis, camera_coordinate
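For intuition, rs2_deproject_pixel_to_point is a pinhole back-projection (plus lens-distortion terms); in the no-distortion case it reduces to the formula below. The intrinsic values here are made up for illustration:

```python
def deproject(pixel, intrin, depth_m):
    """Pinhole back-projection: the no-distortion core of rs2_deproject_pixel_to_point."""
    u, v = pixel
    x = (u - intrin['ppx']) / intrin['fx'] * depth_m   # horizontal offset scaled by depth
    y = (v - intrin['ppy']) / intrin['fy'] * depth_m   # vertical offset scaled by depth
    return [x, y, depth_m]

intrin = {'fx': 600.0, 'fy': 600.0, 'ppx': 320.0, 'ppy': 240.0}  # hypothetical intrinsics
print(deproject((380, 240), intrin, 1.0))  # [0.1, 0.0, 1.0]
```

A pixel 60 columns right of the principal point at 1 m depth maps to x = 60/600 = 0.1 m, matching what get_3d_camera_coordinate returns for an undistorted stream.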

V. PyQt5

For installing PyQt5 in VScode see: https://blog.csdn.net/weixin_40014984/article/details/104531359
PyQt5 Chinese site: http://www.pyqt5.cn/
A very detailed beginner tutorial (with video links): https://blog.csdn.net/qq_40243295/article/details/105633221

1. Creating a Basic Window

In the VScode Explorer, right-click and choose PYQT: New Form.
A Dialog is used here for simplicity, but for real development MainWindow is usually the better choice.
Design a form and save it as a .ui file, for example:

<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
 <class>Dialog</class>
 <widget class="QDialog" name="Dialog">
  <property name="geometry">
   <rect>
    <x>0</x>
    <y>0</y>
    <width>400</width>
    <height>300</height>
   </rect>
  </property>
  <property name="windowTitle">
   <string>Dialog</string>
  </property>
  <widget class="QDialogButtonBox" name="buttonBox">
   <property name="geometry">
    <rect>
     <x>30</x>
     <y>240</y>
     <width>341</width>
     <height>32</height>
    </rect>
   </property>
   <property name="orientation">
    <enum>Qt::Horizontal</enum>
   </property>
   <property name="standardButtons">
    <set>QDialogButtonBox::Cancel|QDialogButtonBox::Ok</set>
   </property>
  </widget>
  <widget class="QLabel" name="label">
   <property name="geometry">
    <rect>
     <x>110</x>
     <y>100</y>
     <width>54</width>
     <height>12</height>
    </rect>
   </property>
   <property name="text">
    <string>TextLabel</string>
   </property>
  </widget>
 </widget>
 <resources/>
 <connections>
  <connection>
   <sender>buttonBox</sender>
   <signal>accepted()</signal>
   <receiver>Dialog</receiver>
   <slot>accept()</slot>
   <hints>
    <hint type="sourcelabel">
     <x>248</x>
     <y>254</y>
    </hint>
    <hint type="destinationlabel">
     <x>157</x>
     <y>274</y>
    </hint>
   </hints>
  </connection>
  <connection>
   <sender>buttonBox</sender>
   <signal>rejected()</signal>
   <receiver>Dialog</receiver>
   <slot>reject()</slot>
   <hints>
    <hint type="sourcelabel">
     <x>316</x>
     <y>260</y>
    </hint>
    <hint type="destinationlabel">
     <x>286</x>
     <y>274</y>
    </hint>
   </hints>
  </connection>
 </connections>
</ui>

Right-click the .ui file and choose PYQT: Compile Form.
This produces a .py file whose name starts with Ui_. Do not edit it; it only serves to inspect the properties of the Qt widgets. Put the actual logic in a separate new .py file, so the front end and the back end can be developed independently.
Create a new .py file:

import sys
from PyQt5.QtWidgets import QApplication, QDialog
from Ui_Untitled import Ui_Dialog # import the UI designed in Qt Designer
if __name__ == '__main__':
    app = QApplication(sys.argv)
    MainWindow = QDialog()
    ui = Ui_Dialog()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())

Save and run.
This is only a plain Dialog, which is hard to extend, so MainWindow is used from here on.

2. Basic PyQt5: Displaying and Saving RealSense Point Clouds

Ui_Mainwindow.py and Main.py are listed below (only the essential code is kept):

# Ui_Mainwindow.py
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1312, 571)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(20, 20, 101, 25))
        self.pushButton.setObjectName("pushButton")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(130, 20, 431, 17))
        self.label.setObjectName("label")
        self.label_show = QtWidgets.QLabel(self.centralwidget)
        self.label_show.setGeometry(QtCore.QRect(20, 60, 1280, 480))
        self.label_show.setObjectName("label_show")
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 1312, 22))
        self.menubar.setObjectName("menubar")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "Capture point cloud"))
        self.label.setText(_translate("MainWindow", "export path"))
        self.label_show.setText(_translate("MainWindow", "TextLabel"))
# main.py
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QMessageBox
from PyQt5.QtGui import QImage, QPixmap
from Ui_UI_GetPointCloud import Ui_MainWindow
from PyQt5.QtCore import QObject, pyqtSignal

import pyrealsense2 as rs
import cv2
import numpy as np
import open3d as o3d

import threading as th
import datetime as dt
import ctypes
import inspect
import re
import os
class MainWindow(QMainWindow, Ui_MainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()

        # Set up the user interface from Designer.
        self.setupUi(self)
        self.pushButton.clicked.connect(self.pushButton_clicked)
        self.dis_update.connect(self.camera_view)

        self.savePointCLoud = False
        self.id = 0
    # declared inside the window class, at the same level as its methods
    dis_update = pyqtSignal(QPixmap)

    def pushButton_clicked(self):
        self.savePointCLoud = True

    # add a confirmation prompt on exit
    def closeEvent(self, event):
        """Create a message box with two buttons, Yes and No. The first string appears in the
              title bar, the second is the dialog text, the third argument sets the two buttons,
              and the last argument is the default (pre-selected) button. The return value is stored in reply."""

        reply = QMessageBox.question(self, 'Message', "Are you sure to quit?",
                                     QMessageBox.Yes | QMessageBox.No, QMessageBox.No)
        # if Yes was clicked, stop the worker thread and close the application; otherwise ignore the close event
        if reply == QMessageBox.Yes:
            self.stop_thread(self.thread_camera)
            event.accept()
        else:
            event.ignore()

    def open_camera(self):
        # target is the function that opens the camera
        self.thread_camera = th.Thread(target=self.open_realsense)
        self.thread_camera.start()
        # multithreading has shown compatibility bugs
        print('Open Camera')

    def camera_view(self, c):
        # display the Pixmap via setPixmap
        self.label_show.setPixmap(c)
        # setScaledContents scales the image proportionally to fit the QLabel
        self.label_show.setScaledContents(True)

    def _async_raise(self, tid, exctype):
        """raises the exception, performs cleanup if needed"""
        tid = ctypes.c_long(tid)
        if not inspect.isclass(exctype):
            exctype = type(exctype)
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
            tid, ctypes.py_object(exctype))
        if res == 0:
            raise ValueError("invalid thread id")
        elif res != 1:
            # """if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect"""
            ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
            raise SystemError("PyThreadState_SetAsyncExc failed")

    def stop_thread(self, thread):
        self._async_raise(thread.ident, SystemExit)

    def open_realsense(self):
        print('open_realsense')

        # Create a pipeline
        pipeline = rs.pipeline()

        # Create a config and configure the pipeline to stream
        #  different resolutions of color and depth streams
        config = rs.config()
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

        # Start streaming
        profile = pipeline.start(config)

        # Getting the depth sensor's depth scale (see rs-align example for explanation)
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)

        # We will be removing the background of objects more than
        #  clipping_distance_in_meters meters away
        clipping_distance_in_meters = 4  # 4 meter
        clipping_distance = clipping_distance_in_meters / depth_scale

        # Color Intrinsics
        # intr = color_frame.profile.as_video_stream_profile().intrinsics

        # Create an align object
        # rs.align allows us to perform alignment of depth frames to others frames
        # The "align_to" is the stream type to which we plan to align depth frames.
        align_to = rs.stream.color
        align = rs.align(align_to)
        # Streaming loop
        try:
            while True:
                # Wait for a coherent pair of frames: depth and color
                frames = pipeline.wait_for_frames()
                frames  = align.process(frames) # align the frames
                depth_frame = frames.get_depth_frame()
                color_frame = frames.get_color_frame()

                profile = frames.get_profile()
                if not depth_frame or not color_frame:
                    continue

                # Convert images to numpy arrays
                depth_image = np.asanyarray(depth_frame.get_data())
                color_image = np.asanyarray(color_frame.get_data())

                # display the current depth and color images in real time
                depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(
                    depth_image, alpha=0.03), cv2.COLORMAP_JET)

                # Remove background - Set pixels further than clipping_distance to grey
                RemoveBackground = True
                grey_color = 153
                depth_image_3d = np.dstack(
                    (depth_image, depth_image, depth_image))  # depth image is 1 channel, color is 3 channels
                if RemoveBackground == True:
                    depth_image_3d = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color,
                                              color_image)
                else:
                    depth_image_3d = color_image
                images = np.hstack((depth_image_3d, depth_colormap))
                qimage = QImage(images, 1280, 480, QImage.Format_BGR888)
                pixmap = QPixmap.fromImage(qimage)
                self.dis_update.emit(pixmap)

                # build the point-cloud data
                o3d_color = o3d.geometry.Image(color_image)
                o3d_depth = o3d.geometry.Image(depth_image)
                rgbd_image = o3d.geometry.RGBDImage.create_from_color_and_depth(o3d_color, o3d_depth,
                                                                                depth_scale=1000.0,
                                                                                depth_trunc=3.0,
                                                                                convert_rgb_to_intensity=False)

                intrinsics = profile.as_video_stream_profile().get_intrinsics()  # here profile refers to the color frame
                # intrinsics = color_frame.profile.as_video_stream_profile().intrinsics
                # convert to Open3D camera intrinsics
                pinhole_camera_intrinsic = o3d.camera.PinholeCameraIntrinsic(
                    intrinsics.width, intrinsics.height,
                    intrinsics.fx, intrinsics.fy,
                    intrinsics.ppx, intrinsics.ppy
                )
                # create the point cloud (pcd)
                o3d_result = o3d.geometry.PointCloud.create_from_rgbd_image(
                    rgbd_image,
                    pinhole_camera_intrinsic
                )

                if(self.savePointCLoud == True):

                    now_date = dt.datetime.now().strftime('%F')
                    now_time = dt.datetime.now().strftime('%F_%H%M%S')

                    path_ok = os.path.exists(now_date)
                    if(path_ok == False):
                        os.mkdir(now_date)

                    if(os.path.isdir(now_date)):  # inside the folder named after today's date
                        id = str(self.id)
                        self.id = self.id+1

                        pc_full_path_ply = os.path.join(
                            './', now_date, id + '.ply')
                        pc_full_path_stl = os.path.join(
                            './', now_date, id + '.stl')
                        pc_full_path_pcd = os.path.join(
                            './', now_date, id + '.pcd')
                        print(pc_full_path_ply)

                        self.label.setText(pc_full_path_ply)
                        # write .ply
                        o3d.io.write_point_cloud(pc_full_path_ply, o3d_result)

                        #read ply
                        mesh_ply = o3d.io.read_triangle_mesh(pc_full_path_ply)
                        mesh_ply.compute_vertex_normals()

                        # V_mesh holds the vertex coordinates of the ply mesh, shape=(n,3), where n is the vertex count; each row is a float x, y, z coordinate
                        V_mesh = np.asarray(mesh_ply.vertices)

                        # print("ply info:", mesh_ply)
                        # print("ply vertices shape:", V_mesh.shape)
                        # o3d.visualization.draw_geometries([mesh_ply], window_name="ply", mesh_show_wireframe=True)

                        # ply -> stl
                        mesh_stl = o3d.geometry.TriangleMesh()
                        mesh_stl.vertices = o3d.utility.Vector3dVector(V_mesh)

                        mesh_stl.compute_vertex_normals()
                        # print("stl info:", mesh_stl)
                        # o3d.visualization.draw_geometries([mesh_stl], window_name="stl")
                        # write .stl
                        o3d.io.write_triangle_mesh(pc_full_path_stl, mesh_stl)

                        o3d_result.transform([[1,0,0,0],[0,-1,0,0],[0,0,-1,0],[0,0,0,1]])
                        o3d.visualization.draw_geometries([o3d_result], window_name="pcd")
                        # save pcd
                        o3d.io.write_point_cloud(pc_full_path_pcd, o3d_result)

                        print('PointCloud Saved ply stl pcd')

                    self.savePointCLoud = False

                # time.sleep(DELAY)
        finally:
            pipeline.stop()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = MainWindow()
    w.show()
    w.open_camera()
    print('Hello World!')
    sys.exit(app.exec_())
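A note on the background removal inside open_realsense: clipping_distance_in_meters is divided by depth_scale so the threshold is expressed in the same raw depth units as the z16 image, and np.where then replaces out-of-range pixels with grey. A minimal numpy sketch (the depth_scale value is assumed; the <= 0 test, as in the official align example, also greys out invalid zero-depth pixels):

```python
import numpy as np

depth_scale = 0.001                    # meters per depth unit (typical for D400; query the sensor)
clipping_distance = 4 / depth_scale    # 4 m expressed in raw depth units
grey_color = 153

depth_image = np.array([[1000, 5000, 0]], dtype=np.uint16)  # 1 m, 5 m, invalid
color_image = np.zeros((1, 3, 3), dtype=np.uint8)           # black 3-channel image

# depth is 1 channel, color is 3 channels, so stack depth to match the shape
depth_image_3d = np.dstack((depth_image, depth_image, depth_image))
result = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0),
                  grey_color, color_image)
print(result[0].tolist())  # the in-range pixel keeps its color; far/invalid pixels turn grey
```

The in-range pixel keeps its original color while the 5 m pixel and the zero-depth pixel both become grey_color.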

3. Thread Management with PyQt5: Real-Time RGB Display from Multiple RealSense Cameras

Ui_Mainwindow.py and Main.py are listed below (only the essential code is kept):

# Ui_Mainwindow.py
from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1338, 625)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.Btn_takePhotos = QtWidgets.QPushButton(self.centralwidget)
        self.Btn_takePhotos.setGeometry(QtCore.QRect(30, 540, 75, 31))
        self.Btn_takePhotos.setObjectName("Btn_takePhotos")
        self.label_top = QtWidgets.QLabel(self.centralwidget)
        self.label_top.setGeometry(QtCore.QRect(40, 10, 701, 16))
        self.label_top.setObjectName("label_top")
        self.label_PhotoLeft = QtWidgets.QLabel(self.centralwidget)
        self.label_PhotoLeft.setGeometry(QtCore.QRect(40, 40, 640, 480))
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.label_PhotoLeft.sizePolicy().hasHeightForWidth())
        self.label_PhotoLeft.setSizePolicy(sizePolicy)
        self.label_PhotoLeft.setObjectName("label_PhotoLeft")
        self.label_PhotoRight = QtWidgets.QLabel(self.centralwidget)
        self.label_PhotoRight.setGeometry(QtCore.QRect(680, 40, 640, 480))
        self.label_PhotoRight.setObjectName("label_PhotoRight")
        self.label_Directory = QtWidgets.QLabel(self.centralwidget)
        self.label_Directory.setGeometry(QtCore.QRect(230, 540, 771, 31))
        self.label_Directory.setObjectName("label_Directory")
        self.Btn_chooseDirectory = QtWidgets.QPushButton(self.centralwidget)
        self.Btn_chooseDirectory.setGeometry(QtCore.QRect(110, 540, 98, 31))
        self.Btn_chooseDirectory.setObjectName("Btn_chooseDirectory")
        self.label_PhotoLeft.raise_()
        self.Btn_takePhotos.raise_()
        self.label_top.raise_()
        self.label_PhotoRight.raise_()
        self.label_Directory.raise_()
        self.Btn_chooseDirectory.raise_()
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 1338, 22))
        self.menubar.setObjectName("menubar")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.Btn_takePhotos.setText(_translate("MainWindow", "Take photo"))
        self.label_top.setText(_translate("MainWindow", "Displaying images at 640*480 resolution"))
        self.label_PhotoLeft.setText(_translate("MainWindow", "TextLabel"))
        self.label_PhotoRight.setText(_translate("MainWindow", "TextLabel"))
        self.label_Directory.setText(_translate("MainWindow", "No path selected"))
        self.Btn_chooseDirectory.setText(_translate("MainWindow", "Choose save directory"))
# main.py
# Uses DeviceManager for multi-camera management

# Modules required for the program to run
import platform
import sys
# The basic widgets used by PyQt5 live in the PyQt5.QtWidgets module
from PyQt5.QtWidgets import QApplication, QMainWindow, QMessageBox, QFileDialog
# GraphicsView-related modules
from PyQt5.QtWidgets import QGraphicsScene, QGraphicsPixmapItem
# PyQt5 core modules
from PyQt5.QtCore import QObject, pyqtSignal, QThread
from cv2 import imread
# Ui_MainWindow module generated by the Designer tool
from Ui_Multicamera_photograph import Ui_MainWindow
# Format-conversion modules
from PyQt5.QtGui import QImage, QPixmap
# Pyrealsense-related modules
import pyrealsense2 as rs
import cv2
#from skimage import io
from realsense_device_manager import DeviceManager  # multi-camera management
# Other important modules
import os
import time
import datetime as dt
import numpy as np
import threading as th
# Modules for raising an exception into a thread
import ctypes
import inspect

DELAY = 0.001  # pause (s) between emitted frames; tune as needed
class MyMainForm(QMainWindow, Ui_MainWindow):
    def __init__(self, parent=None):
        # create the parent object
        super(MyMainForm, self).__init__(parent)
        # set up the ui
        self.setupUi(self)
        # initialise the widgets
        self.label_Directory.setText('Default path: current folder')
        # connect signals and slots:
        # when dis_update is emitted as a signal it triggers camera_view
        self.dis_update_l.connect(self.camera_view_l)
        self.dis_update_r.connect(self.camera_view_r)
        self.Btn_takePhotos.clicked.connect(self.Btn_takePhotos_clicked)
        self.Btn_chooseDirectory.clicked.connect(self.choose_saveDirectory)
        self.thread_camera = None
        self.takePhotos = False
        self.saveDirectory = "./"
    # declared inside the window class, at the same level as its def-ined methods
    dis_update_l = pyqtSignal(QPixmap)
    dis_update_r = pyqtSignal(QPixmap)

    def Btn_takePhotos_clicked(self):  # request a capture on the next frame
        self.takePhotos = True

    def choose_saveDirectory(self):
        filepath = QFileDialog.getExistingDirectory(self, "Choose a save directory", "/")
        self.saveDirectory = filepath
        self.label_Directory.setText(self.saveDirectory)

    # add a confirmation prompt on exit
    def closeEvent(self, event):
        """Create a message box with two buttons, Yes and No. The first string is
        shown in the title bar, the second in the dialog body, the third argument
        lists the two buttons, and the last argument is the default (pre-selected)
        button. The return value is stored in reply."""

        reply = QMessageBox.question(self, 'Message', "Are you sure to quit?",
                                     QMessageBox.Yes | QMessageBox.No, QMessageBox.No)
        # if Yes was clicked, stop the camera thread and close; otherwise ignore the close event
        if reply == QMessageBox.Yes:
            self.stop_thread(self.thread_camera)
            event.accept()
        else:
            event.ignore()

    def open_camera(self):
        # target is the function that opens the cameras
        self.thread_camera = th.Thread(
            target=self.multiCamera_photograph_npImage)
        self.thread_camera.start()
        print('Open Camera')

    def camera_view_l(self, Pixmap):  # left image
        # setPixmap displays the Pixmap on the QLabel
        self.label_PhotoLeft.setPixmap(Pixmap)
        # setScaledContents scales the image to fit the QLabel
        self.label_PhotoLeft.setScaledContents(True)

    def camera_view_r(self, Pixmap):  # right image
        # setPixmap displays the Pixmap on the QLabel
        self.label_PhotoRight.setPixmap(Pixmap)
        # setScaledContents scales the image to fit the QLabel
        self.label_PhotoRight.setScaledContents(True)

    def _async_raise(self, tid, exctype):
        """raises the exception, performs cleanup if needed"""
        tid = ctypes.c_long(tid)
        if not inspect.isclass(exctype):
            exctype = type(exctype)
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
            tid, ctypes.py_object(exctype))
        if res == 0:
            raise ValueError("invalid thread id")
        elif res != 1:
            # """if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect"""
            ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
            raise SystemError("PyThreadState_SetAsyncExc failed")

    def stop_thread(self, thread):
        self._async_raise(thread.ident, SystemExit)

    def multiCamera_photograph_npImage(self):
        resolution_width = 640  # pixels; 640 rather than 1280 to match the depth stream resolution
        resolution_height = 480  # pixels
        frame_rate = 30  # fps

        # frames to let the auto-exposure controller stabilise
        dispose_frames_for_stablisation = 30
        try:
            # Enable the streams from all the intel realsense devices
            rs_config = rs.config()
            rs_config.enable_stream(rs.stream.depth, resolution_width,
                                    resolution_height, rs.format.z16, frame_rate)
            rs_config.enable_stream(rs.stream.infrared, 1, resolution_width,
                                    resolution_height, rs.format.y8, frame_rate)
            rs_config.enable_stream(rs.stream.color, resolution_width,
                                    resolution_height, rs.format.bgr8, frame_rate)

            # Use the device manager class to enable the devices and get the frames
            device_manager = DeviceManager(rs.context(), rs_config)
            device_manager.enable_all_devices()

            # Allow some frames for the auto-exposure controller to stablise
            for frame in range(dispose_frames_for_stablisation):
                frames = device_manager.poll_frames()

            assert(len(device_manager._available_devices) > 0)

            # Continue acquisition until terminated with Ctrl+C by the user
            images_color = []  # colour images from each camera
            images_depth = []  # depth images from each camera
            serials = []  # serial numbers of each camera
            numOfPhotos = 0  # index of the current capture group
            while 1:
                #_enabled_devices[device_serial] = (Device(pipeline, pipeline_profile, product_line))
                for (serial, device) in device_manager._enabled_devices.items():
                    # device.pipeline plays the same role here as pipeline = rs.pipeline()
                    # serial: camera serial number
                    # device: Device(pipeline, pipeline_profile, product_line), see realsense_device_manager
                    # TODO: per-camera stream processing goes here
                    streams = device.pipeline_profile.get_streams()
                    # frameset will be a pyrealsense2.composite_frame object
                    frames = device.pipeline.wait_for_frames()
                    color_frame = frames.get_color_frame()
                    color_image = np.asanyarray(color_frame.get_data())
                    depth_frame = frames.get_depth_frame()
                    depth_image = np.asanyarray(depth_frame.get_data())
                    # store the images from each camera
                    images_color.append(color_image)
                    images_depth.append(depth_image)
                    serials.append(serial)
                    ##################################
                # Save the results
                if self.takePhotos:
                    now_date = dt.datetime.now().strftime('%F')
                    now_time = dt.datetime.now().strftime('%F_%H%M%S')

                    # create a folder named after today's date under the save directory
                    # (os.path.exists checks relative to the current folder by default)
                    nowDateSaveDirectory = os.path.join(self.saveDirectory, now_date)
                    if not os.path.exists(nowDateSaveDirectory):
                        os.mkdir(nowDateSaveDirectory)

                    if os.path.isdir(nowDateSaveDirectory):
                        #id = self.lineEdit_id.text()
                        id = str(numOfPhotos)
                        # if (re.match('^[a-zA-Z0-9_]*$', id) and (id != '')):
                        # build the file paths inside the dated save directory
                        depth_full_path_l = os.path.join(
                            nowDateSaveDirectory, id + '_l_depth.png')
                        depth_full_path_r = os.path.join(
                            nowDateSaveDirectory, id + '_r_depth.png')
                        color_full_path_l = os.path.join(
                            nowDateSaveDirectory, id + '_l_color.png')
                        color_full_path_r = os.path.join(
                            nowDateSaveDirectory, id + '_r_color.png')
                        self.label_Directory.setText(nowDateSaveDirectory)
                        # save color_image (imencode + tofile also handles non-ASCII paths)
                        cv2.imencode('.png', images_color[0])[1].tofile(color_full_path_l)
                        cv2.imencode('.png', images_color[1])[1].tofile(color_full_path_r)
                        # save depth_image
                        cv2.imencode('.png', images_depth[0])[1].tofile(depth_full_path_l)
                        cv2.imencode('.png', images_depth[1])[1].tofile(depth_full_path_r)
                        # print('ok')
                    self.takePhotos = False
                    numOfPhotos += 1
                # Visualise the results
                flag = False  # has the first image been shown yet?
                for image in images_color:
                    if flag == False:
                        # show the first image on the left
                        MyQImage = QImage(
                            image, resolution_width, resolution_height, QImage.Format_BGR888)
                        pixmap = QPixmap.fromImage(MyQImage)
                        self.dis_update_l.emit(pixmap)  # left
                        #print('test image l done')
                        time.sleep(DELAY)
                        flag = True
                    elif flag == True:
                        # show the second image on the right
                        MyQImage = QImage(
                            image, resolution_width, resolution_height, QImage.Format_BGR888)
                        pixmap = QPixmap.fromImage(MyQImage)
                        self.dis_update_r.emit(pixmap)  # right
                        #print('test image r done')
                        time.sleep(DELAY)
                multi_serial = " and ".join(serials)
                self.label_top.setText(
                    'Color image from RealSense Device Nr: ' + multi_serial)
                # clear the buffered data before the next iteration
                images_color.clear()
                images_depth.clear()
                serials.clear()  # serials must also be cleared, or it grows every loop
                #time.sleep(DELAY)

        except KeyboardInterrupt:
            print("The program was interupted by the user. Closing the program...")

        finally:
            device_manager.disable_streams()

if __name__ == "__main__":
    # Boilerplate: every PyQt5 program needs a QApplication object; sys.argv is the
    # command-line argument list, which also lets the program run on double-click.
    app = QApplication(sys.argv)
    # create the window
    myWin = MyMainForm()
    # show the window on screen
    myWin.show()
    myWin.open_camera()

    print("started")
    # run the event loop; sys.exit makes sure the program exits cleanly
    sys.exit(app.exec_())

A few points worth noting:
1) self.dis_update.connect(self.camera_view) ties signal and slot together: camera_view renders the received Pixmap onto the QLabel, and dis_update is declared inside the window class itself.
2) The RealSense capture function is not called directly; it is started in its own thread:

self.thread_camera = th.Thread(target=self.multiCamera_photograph_npImage)
self.thread_camera.start()

3) The close-confirmation dialog has to be handled separately, and the capture thread is released by calling _async_raise.
4) This code was originally meant to map multiple cameras onto multiple widgets, but that design was later replaced by an np.hstack() scheme (the leftover code was never refactored XD).
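The np.hstack() scheme mentioned in point 4 can be sketched as follows. This is a minimal sketch with synthetic arrays standing in for the per-camera BGR frames; the PyQt5 part is shown only as a comment, since it needs a running application:

```python
import numpy as np

# Synthetic stand-ins for the per-camera BGR frames that
# np.asanyarray(color_frame.get_data()) returns in the capture loop.
left = np.zeros((480, 640, 3), dtype=np.uint8)       # black frame
right = np.full((480, 640, 3), 255, dtype=np.uint8)  # white frame

# Stack the frames side by side into one wide image, so a single
# QLabel (and a single signal) can display all cameras at once.
combined = np.hstack((left, right))
print(combined.shape)  # (480, 1280, 3)

# With PyQt5 the combined array would then be wrapped once, e.g.:
# MyQImage = QImage(combined.data, combined.shape[1], combined.shape[0],
#                   combined.strides[0], QImage.Format_BGR888)
# self.dis_update.emit(QPixmap.fromImage(MyQImage))
```

Passing combined.strides[0] as bytesPerLine keeps the QImage correct even if the stacked array is not contiguous row by row.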


Summary
