Starting a RealSense camera
Platform and OS:
Jetson Nano
Ubuntu 18.04
Reference tutorial: https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python#examples
I tried installing via pip but it failed; readers are welcome to try that route themselves.
You need to have downloaded librealsense beforehand.
Main steps:
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install python python-dev
or
sudo apt-get install python3 python3-dev
- Enter the librealsense directory. If you already created a build directory earlier, just go straight into it:
mkdir build
cd build
cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true
make -j4
sudo make install
Note where the build above installs the two files librealsense2.so and pyrealsense2.so. In my case, presumably because I built for Python 3, there was no pyrealsense2.so; instead the build produced pyrealsense2.cpython-35m-arm-linux-gnueabihf.so, which you can rename to pyrealsense2.so.
Then copy these two files into your Python project folder, and you can use them in your code.
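The rename-and-copy step can be scripted. A minimal sketch, assuming the filename your build emitted (look in librealsense/build/wrappers/python) and a project directory of your choosing:

```python
import os
import shutil

def install_bindings(built_so: str, project_dir: str) -> str:
    """Copy the compiled binding into project_dir under the name
    pyrealsense2.so so that `import pyrealsense2` resolves to it."""
    dst = os.path.join(project_dir, "pyrealsense2.so")
    shutil.copy(built_so, dst)
    return dst

# Name produced by a Python 3.5 build on ARM -- adjust to whatever
# your own build actually emitted.
built_name = "pyrealsense2.cpython-35m-arm-linux-gnueabihf.so"
if os.path.exists(built_name):
    print(install_bindings(built_name, "."))
```

The copy keeps the original build output intact, so a rebuild will not clash with the renamed file.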
To check whether the RealSense camera starts: I used an SR300, and it started successfully.
The example code below is adapted from https://github.com/IntelRealSense/librealsense/blob/development/wrappers/python/examples/opencv_viewer_example.py
Notes:
The pyrealsense2.cpython-35m-arm-linux-gnueabihf.so file must be renamed to pyrealsense2.so, otherwise the import will fail.
Install numpy in advance.
OpenCV ships with the Jetson Nano system image, so no installation is needed.
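Before running the example, you can confirm that the interpreter actually sees the renamed module. A quick sanity check (importlib.util.find_spec locates a module without fully loading it):

```python
import importlib.util

def module_visible(name: str) -> bool:
    """Return True if `import name` would find the module on sys.path."""
    return importlib.util.find_spec(name) is not None

# If this prints False, the renamed pyrealsense2.so is not on sys.path --
# place it next to your script or extend PYTHONPATH.
print(module_visible("pyrealsense2"))
```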
## License: Apache 2.0. See LICENSE file in root directory.
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.
###############################################
## Open CV and Numpy integration ##
###############################################
import pyrealsense2 as rs
import numpy as np
import cv2
# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))
found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

try:
    while True:
        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()
        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape

        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image, dsize=(depth_colormap_dim[1], depth_colormap_dim[0]), interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)
finally:
    # Stop streaming
    pipeline.stop()
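A note on the alpha=0.03 passed to cv2.convertScaleAbs above: it compresses 16-bit raw depth values into the 8-bit range that applyColorMap expects, so anything above 255 / 0.03 ≈ 8500 depth units saturates at white-hot in the colormap. A NumPy-only sketch of that same scaling (so it runs without OpenCV installed):

```python
import numpy as np

def scale_depth_to_u8(depth: np.ndarray, alpha: float = 0.03) -> np.ndarray:
    """NumPy equivalent of cv2.convertScaleAbs(depth, alpha=alpha):
    scale, take the absolute value, round, and saturate into uint8."""
    scaled = np.rint(np.abs(depth.astype(np.float64) * alpha))
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Raw z16 depth units saturate at 255 / 0.03 = 8500 units.
depth = np.array([[0, 1000, 8500, 20000]], dtype=np.uint16)
print(scale_depth_to_u8(depth))  # 0, 30, 255, 255
```

Tune alpha to your scene: a smaller alpha spreads the colormap over a longer depth range, a larger one gives more contrast up close.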