CSI Cameras on the TX2 (The Easy Way)

Reposted from: CSI Cameras on the TX2 (The Easy Way)
I love Nvidia's new embedded computers. The Nvidia Jetson embedded computing product line, including the TK1, TX1, and TX2, is a series of small computers made to smoothly run software for computer vision, neural networks, and artificial intelligence without using tons of energy. Better yet, their developer kits can be used as excellent single-board computers, so if you've ever wished for a beefed-up Raspberry Pi, this is what you are looking for. I personally use the Jetson TX2, which is the most powerful module available and is widely used.

One of the big drawbacks with Jetson devices is that the documentation does not (and cannot) cover all use cases. The community has yet to mature to the point where you can find some random blog's guide on whatever you need to do (à la Raspberry Pi and Arduino), so you'll often have to figure things out for yourself.

But I am here to dispel the mystery around at least one thing: using CSI cameras on your TX2. These methods should work on other Jetson devices too!

We’re going to look at utilizing the Jetson’s image processing powers and capturing video from the TX2’s own special CSI camera port. Specifically, I’ll show you:

Why you’d even want a CSI camera.
Where to get a good CSI camera.
How to get high resolution, high framerate video off your CSI cameras using gstreamer and the Nvidia multimedia pipeline.
How to use that video in OpenCV and ROS.

Table of Contents

Why CSI cameras (vs USB)?
    Why do CSI cameras perform better than USB?
    Where to get CSI cameras (for Jetson devices)
Getting Video off a CSI camera
    Selecting the right pipelines
    Command line tools
    OpenCV
        Compiling OpenCV 3 with GStreamer support on Nvidia Jetson
        Video Capture from GStreamer pipeline in OpenCV
    Robot Operating System (ROS)

Why CSI cameras (vs USB)?

CSI cameras should be your primary choice of camera if you are looking to push for maximum performance (in terms of FPS, resolution, and CPU usage) or if you need low-level control of your camera — and if you are willing to pay a premium for these features.

I personally use CSI cameras because I need high resolution video while maintaining acceptable framerate. With the TX2 and a Leopard Imaging IMX377CS I easily pull 4k video at ~20 fps. Awesome. I also like the ability to swap out lenses on CSI cameras, which typically use small format C-Mount or M12 lenses. Due to the popularity of the GoPro, there are plenty of C/CS-Mount lenses as well as lens adapters for converting DSLR camera lenses to C-Mount.

USB cameras, on the other hand, can be incredibly cheap, typically work out of the box via the V4L2 protocol, and are an excellent choice for applications where you don't need high-performance video. You can get 720p video for only $20 with the Logitech C270, as California Polytechnic State University did in their well-documented 'Jet' Robot Kit, which was enough for their toy robot car to identify and collect objects, find faces, locate lines, and more.

An awesome post on the Nvidia developer forums by user Jazza draws out further comparisons between USB and CSI cameras:

USB Cameras:

    Are easy to integrate.
    Can do a lot of the image work off-board (exposure control, frame rate, etc).
    Many provide inputs/interrupts that can help time your application (e.g. interrupt on new frame).
    Use CPU time due to the USB bus, which will impact your application if it is already using 100% of the CPU.
    Are not optimal for using the hardware vision pipeline (hardware encoders, etc).
    Can work over long distances (up to the max of the USB standard).
    Can support larger image sensors (1″ and higher, for better image quality and less noise).

CSI Bus Cameras:

    Optimized in terms of CPU and memory usage for getting images processed and into memory.
    Can take full advantage of hardware vision pipeline.
    Work only over short distances from the TX1 (usually 10 cm max) unless you use serialization systems (GMSL, FPD-Link, COAXPress, Ambarella), which are immature and highly custom at the moment.
    Are mostly smaller sensors from phone camera modules, but custom ones can be made at a price. The added noise from the smaller sensor can be mitigated a bit through the hardware denoise in the TX1/2.
    Give you access to low-level control of the sensor/camera.

I recommend you check out the full post for further insights, such as considerations for networked cameras.
Why do CSI cameras perform better than USB?

The biggest issue with USB is bandwidth and processing needs. USB 3.0 can push 5 Gbps, which is technically enough to push an uncompressed 1080p video stream at 60 fps, or even 4K (3840×2160) at 20 fps (see for yourself). But this is based on bandwidth alone and hides the additional processing and memory-management bottlenecks involved in handling the video. For example, the See3CAM_CU130 USB 3.0 camera should be capable of 1080p at 60 fps, but in a real-world test on the TK1 it only eked out 18 fps at 1080p compressed and a paltry 1 fps uncompressed. While performance would be better on a more powerful machine, this is evidence of the problem.
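As a quick back-of-the-envelope check of those numbers (assuming uncompressed video at 3 bytes per pixel):

1920 × 1080 pixels × 3 bytes × 8 bits × 60 fps ≈ 2.99 Gbps
3840 × 2160 pixels × 3 bytes × 8 bits × 20 fps ≈ 3.98 Gbps

Both fit under USB 3.0's 5 Gbps, which is exactly why the real bottleneck shows up in processing rather than raw bandwidth.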

In contrast, the Jetson TX1 and TX2 utilize “six dedicated MIPI CSI-2 camera ports that provide up to 2.5 Gb/s per lane of bandwidth and 1.4 gigapixels/s processing by dual Image Service Processors (ISP).” In other words, they have the bandwidth for three 4K cameras (or six 1080p cameras at 30 fps). Again, bandwidth isn't everything, because those images still need to be moved and processed, but the hardware vision pipeline lets the images skip loading into DRAM and reduces CPU load by processing video independently of the primary CPU. In my own experience, I've been able to run 4K video at ~20 fps by utilizing these hardware features on the TX2. This is why video works so efficiently through CSI cameras: independent hardware specialized for video, much like a GPU is specialized for 3D graphics.
Where to get CSI cameras (for Jetson devices)

In my own research, I've found only a handful of resources on finding CSI cameras. The Jetson Wiki has a decent page surveying different camera options, and you may be able to find some tips on the Jetson developer forums, but that's about it.

As for actual shops, there is:

e-con Systems, one of the early entrants into making CSI cameras you can plug directly into a developer kit.
Leopard Imaging, Nvidia's official camera partner.

I personally use the Leopard Imaging IMX377CS and find it quite capable. Plus, they have pretty good instructions for installing the drivers, which is always welcome.
Getting Video off a CSI camera

In Nvidia’s “Get Started with the JetPack Camera API” they explain that the best way to interface with the Jetson’s multimedia hardware (including the ports for CSI cameras) is via their libargus C++ library or through gstreamer. Nvidia does not support the V4L2 video protocol for CSI cameras. Since gstreamer is well documented and very common, I’ve focused on it.

GStreamer is configured using pipelines, which explain the series of operations applied to your video stream from input to output. The crux of getting video from your CSI camera boils down to being able to (1) use gstreamer in your program and (2) to use efficient pipelines.

A Note on Drivers: You will most likely need to install the drivers for your camera before any of the GStreamer functionality will even work. Since CSI cameras tend to be a smaller market, you might not find a guide online but should be able to get one from the manufacturer. Leopard Imaging, for example, provided a nice guide (over email) for setting up their drivers, but it only got me to the point of using GStreamer in the terminal. In this post, we’ll venture further and get that data into your code. 

Selecting the right pipelines

As I just mentioned, one of the keys to getting quality performance with CSI cameras is using the most efficient gstreamer pipelines. This generally means outputting in the correct format. You will see me repeatedly use a pipeline along the lines of:
nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink

The very important part here is video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR, which ensures that the raw video from the CSI camera is converted to the BGR color space.

In the case of OpenCV and many other programs, images are stored in this BGR format. By using the image pipeline to pre-convert to BGR, we ensure that the Jetson's hardware modules are used to convert the images rather than the CPU. In my own experimentation, using a pipeline without this conversion results in horrible performance, at about 10 fps max for 1080p video on the TX2.
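Before wiring a pipeline like this into a program, you can sanity-check it from the terminal. One trick (my own habit, not from the original guide) is to swap appsink for fakesink and let gst-launch report whether every stage negotiates:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! fakesink -v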
Command line tools

There are a few command line tools I’ll briefly note.

nvgstcapture

nvgstcapture-1.0 is a program included with L4T that makes it easy to capture and save video to file. It’s also a quick way to pull up the view from your camera.
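For example, simply running it with no arguments should open a live preview from the CSI camera (the available runtime key commands get printed to the terminal):

nvgstcapture-1.0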

gst-launch

You can run a GStreamer pipeline with gst-launch-1.0.

Example 1: View 1080p video from your camera

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

Example 2: View 1080p video from your camera and print the true fps to the console.

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! fpsdisplaysink text-overlay=false -v

Check out this Gstreamer pipelines for Tegra X1 guide for more example pipelines.

gst-inspect

You can inspect pipeline elements with gst-inspect-1.0.

Example: Inspect the capabilities of the CSI camera interface.

gst-inspect-1.0 nvcamerasrc

OpenCV

Alright, so let’s start capturing video in our own code rather than just messing with stuff in the terminal.

When setting up your Jetson device, Nvidia Jetpack installs a special, closed-source version of OpenCV called OpenCV4Tegra, which is optimized for Jetson and is slightly faster than the open source version. While it is nice that OpenCV4Tegra runs faster than plain OpenCV 2, no version of OpenCV 2 supports video capture from GStreamer, so we won't be able to easily grab video with it.

OpenCV 3 does support capturing video from gstreamer if you compile it from source with the correct options. So we’ll replace OpenCV4Tegra with a self-compiled OpenCV 3. Once this is done, it is quite easy to capture video via a gstreamer pipeline.
Compiling OpenCV 3 with GStreamer support on Nvidia Jetson

Remove OpenCV4Tegra by running:

sudo apt-get purge libopencv4tegra-dev libopencv4tegra
sudo apt-get purge libopencv4tegra-repo
sudo apt-get update

Download Jetson Hacks’ Jetson TX2 OpenCV installer:

git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
cd buildOpenCVTX2


(More info on this script at Jetson Hacks’ own install guide.)

Open buildOpenCV.sh and change the line -DWITH_GSTREAMER=OFF \ to -DWITH_GSTREAMER=ON \. This will ensure OpenCV is compiled with gstreamer support.
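If you'd rather script that edit, a one-line sed should do it (assuming the flag appears exactly as written above):

sed -i 's/-DWITH_GSTREAMER=OFF/-DWITH_GSTREAMER=ON/' buildOpenCV.sh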

Build OpenCV by running the install script. This will take some time.

./buildOpenCV.sh

Jetson Hacks also warns that “sometimes the make tool does not build everything. Experience dictates to go back to the build directory and run make again, just to be sure.” I recommend the same. Check out their video guide if you really need help.

Finally, switch to the build directory to install the libraries you just built.

cd ~/opencv/build
sudo make install
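It may also be worth refreshing the linker cache afterwards so the new libraries are picked up at runtime (a common extra step, not part of the original instructions):

sudo ldconfig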

Video Capture from GStreamer pipeline in OpenCV

We now have an installation of OpenCV that can capture video from GStreamer, so let's use it! Luckily, I have a nice C++ example script on Github designed to capture and display video from gstreamer with OpenCV. Let's take a look.

gstreamer_view.cpp

/*
Example code for displaying gstreamer video from the CSI port of the Nvidia Jetson in OpenCV.
Created by Peter Moran on 7/29/17.
https://gist.github.com/peter-moran/742998d893cd013edf6d0c86cc86ff7f
*/
#include <iostream>
#include <opencv2/opencv.hpp>
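// The remainder of the script, condensed from the gist linked above. The
// resolution, framerate, and window title here are example choices of mine;
// adjust them to match your camera.
int main() {
    // Build the GStreamer pipeline: CSI camera -> hardware conversion to BGRx
    // (nvvidconv) -> repack to BGR (videoconvert) -> appsink for OpenCV.
    std::string pipeline = "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, "
                           "format=(string)I420, framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! "
                           "video/x-raw, format=(string)BGRx ! videoconvert ! "
                           "video/x-raw, format=(string)BGR ! appsink";

    // Create the OpenCV capture object and make sure it connected.
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Connection failed" << std::endl;
        return -1;
    }

    // Grab and display frames until the program is killed.
    cv::Mat frame;
    while (true) {
        cap >> frame;                        // Fetch the next frame from the camera
        cv::imshow("Display window", frame); // Show it
        cv::waitKey(1);                      // Needed to actually draw the window
    }
}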

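One way to compile and run it (assuming pkg-config knows about the OpenCV build you just installed; adjust the flags to your setup if not):

g++ -std=c++11 gstreamer_view.cpp -o gstreamer_view $(pkg-config --cflags --libs opencv)
./gstreamer_view

If everything works, a window with live video from your CSI camera should appear.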