Embedded Video Monitoring [v4l2 capture -> VPU encoding -> live555 streaming]

The previous post covered the libraries and overall architecture of the video monitoring system. This post is a follow-up focused on optimization: capture is switched from OpenCV to v4l2, and encoding is switched from x264 to the VPU.

Introduction

Because x264 consumes a large share of the CPU on the ARM Cortex-A9 board and the video stutters, I decided to capture UVC data with v4l2 and encode with the VPU.
The dev board is an i.MX6Q, which has an on-board VPU/IPU that can replace the x264 software encoder and free up the CPU.

Data flow

v4l2 -> yuyv -> yuv420 -> vpu -> h264
The data path was verified step by step:

  1. Capture YUYV data with v4l2, write it to a file, and check that it plays back in pYUV (see the dump sketch after this list)
  2. Convert YUYV to YUV420, write it to a file, and check that it plays back in pYUV
  3. Encode the YUV420 file to H.264 with the VPU, stream it with live555, and check remote playback in VLC
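
For the first two checks, verification is nothing more than writing the raw frames back-to-back into a file that pYUV can open. A minimal sketch, where the file path and the frame buffer are placeholders supplied by the caller:

#include <stdio.h>

/* Append one raw frame (YUYV or YUV420) to a dump file for pYUV playback. */
int dump_frame(const char *path, const unsigned char *frame, size_t size)
{
    FILE *fp = fopen(path, "ab");   /* frames are simply concatenated */
    if (!fp)
        return -1;
    fwrite(frame, 1, size, fp);
    fclose(fp);
    return 0;
}

Calling it once per captured frame, with size = width*height*2 for YUYV or width*height*3/2 for YUV420, produces a file pYUV can play directly.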

1. v4l2

To keep the UVC (USB Video Class) capture path lean, the data is captured directly with v4l2 instead of OpenCV; OpenCV hands back an RGB Mat that would need yet another conversion.

1.1 Determine the camera's output format

Cameras come in two kinds: camera modules, whose output is usually MJPG, and UVC cameras, whose output is YUYV (4:2:2).
How do you tell which kind you have? Cameras attached over USB are generally UVC, while those built onto the board usually output MJPG; the v4l2-ctl command-line tool settles it.

Here is the output for my Logitech USB camera:

root@zjy-T440:~/workStation/crossGcc/vpu/mxc_vpu_test# v4l2-ctl -d /dev/video0 --all
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : Integrated Camera: Integrated C
	Bus info      : usb-0000:00:14.0-8
	Driver version: 5.3.18
	Capabilities  : 0x84A00001
		Video Capture
		Metadata Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height      : 640/480
	Pixel Format      : 'YUYV'
	Field             : None
	Bytes per Line    : 1280
	Size Image        : 614400
	Colorspace        : sRGB
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Limited Range)
	Flags             : 
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 640, Height 480
	Default     : Left 0, Top 0, Width 640, Height 480
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 640, Height 480
Selection: crop_bounds, Left 0, Top 0, Width 640, Height 480
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
                     brightness 0x00980900 (int)    : min=0 max=255 step=1 default=128 value=128
                       contrast 0x00980901 (int)    : min=0 max=255 step=1 default=32 value=32
                     saturation 0x00980902 (int)    : min=0 max=100 step=1 default=64 value=64
                            hue 0x00980903 (int)    : min=-180 max=180 step=1 default=0 value=0
 white_balance_temperature_auto 0x0098090c (bool)   : default=1 value=1
                          gamma 0x00980910 (int)    : min=90 max=150 step=1 default=120 value=120
           power_line_frequency 0x00980918 (menu)   : min=0 max=2 default=1 value=1
      white_balance_temperature 0x0098091a (int)    : min=2800 max=6500 step=1 default=4000 value=4000 flags=inactive
                      sharpness 0x0098091b (int)    : min=0 max=7 step=1 default=2 value=2
         backlight_compensation 0x0098091c (int)    : min=0 max=2 step=1 default=1 value=1
                  exposure_auto 0x009a0901 (menu)   : min=0 max=3 default=3 value=3
              exposure_absolute 0x009a0902 (int)    : min=4 max=1250 step=1 default=166 value=166 flags=inactive
         exposure_auto_priority 0x009a0903 (bool)   : default=0 value=1

And here is the camera on the dev board:

root@imx6qsabresd:/home# v4l2-ctl -d /dev/video2 --all
Driver Info (not using libv4l2):
	Driver name   : uvcvideo
	Card type     : USB Camera
	Bus info      : usb-ci_hdrc.1-1.2
	Driver version: 4.1.15
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
	Width/Height  : 1280/720
	Pixel Format  : 'MJPG'
	Field         : None
	Bytes per Line: 0
	Size Image    : 1843200
	Colorspace    : SRGB
	Flags         : 
Crop Capability Video Capture:
	Bounds      : Left 0, Top 0, Width 1280, Height 720
	Default     : Left 0, Top 0, Width 1280, Height 720
	Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1280, Height 720
Selection: crop_bounds, Left 0, Top 0, Width 1280, Height 720
Streaming Parameters Video Capture:
	Capabilities     : timeperframe
	Frames per second: 30.000 (30/1)
	Read buffers     : 0
                     brightness (int)    : min=-64 max=64 step=1 default=0 value=0
                       contrast (int)    : min=0 max=100 step=1 default=50 value=50
                     saturation (int)    : min=0 max=100 step=1 default=50 value=50
                            hue (int)    : min=-180 max=180 step=1 default=0 value=0
 white_balance_temperature_auto (bool)   : default=1 value=1
                          gamma (int)    : min=100 max=500 step=1 default=300 value=300
           power_line_frequency (menu)   : min=0 max=2 default=1 value=1
      white_balance_temperature (int)    : min=2800 max=6500 step=10 default=4600 value=4600 flags=inactive
                      sharpness (int)    : min=0 max=100 step=1 default=50 value=50
         backlight_compensation (int)    : min=0 max=2 step=1 default=0 value=0
                  exposure_auto (menu)   : min=0 max=3 default=3 value=3
              exposure_absolute (int)    : min=50 max=10000 step=1 default=166 value=166 flags=inactive
         exposure_auto_priority (bool)   : default=0 value=1
                   pan_absolute (int)    : min=-57600 max=57600 step=3600 default=0 value=0
                  tilt_absolute (int)    : min=-43200 max=43200 step=3600 default=0 value=0
                  zoom_absolute (int)    : min=0 max=3 step=1 default=0 value=0

A camera in MJPG mode outputs JPEG images that can be displayed directly; playing them back frame after frame already gives video. But to get H.264 out of it, each frame would first have to be decoded to YUV and then re-encoded, so a UVC camera is used here; the data it delivers is YUYV.
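
Since the camera delivers packed YUYV, capture comes down to the standard V4L2 mmap-streaming sequence. Below is a minimal sketch with a single mmap buffer and trimmed-down error handling; the function name and one-shot structure are for illustration only, and a real capture loop keeps several buffers queued and dequeues them continuously.

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* Grab one YUYV frame into 'out' (must hold width*height*2 bytes). */
int capture_one_frame(const char *dev, unsigned char *out, int width, int height)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    /* Ask the driver for packed YUYV at the requested resolution */
    struct v4l2_format fmt = {0};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;
    fmt.fmt.pix.height      = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;   /* progressive scan */
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {
        close(fd);
        return -1;
    }

    /* Request a single mmap buffer and map it into user space */
    struct v4l2_requestbuffers req = {0};
    req.count  = 1;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    struct v4l2_buffer buf = {0};
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index  = 0;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);

    /* Queue the buffer, start streaming, and wait for one filled frame */
    ioctl(fd, VIDIOC_QBUF, &buf);
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);
    ioctl(fd, VIDIOC_DQBUF, &buf);               /* blocks until a frame is ready */
    memcpy(out, mem, buf.bytesused);             /* one YUYV frame: width*height*2 bytes */

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return (int)buf.bytesused;
}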

1.2 Converting YUYV to YUV420

To use the VPU hardware encoder, the frames must be converted to its input format, YUV420.
The details of the YUV formats are left to the reader; the conversion code is:

int YUV422To420(unsigned char yuv422[], unsigned char yuv420[], int width, int height)
{
    int ynum = width * height;
    int i, j, k = 0;

    /* Y plane: every other byte of the packed YUYV stream */
    for (i = 0; i < ynum; i++) {
        yuv420[i] = yuv422[i * 2];
    }

    /* U plane: take the U samples from even rows only */
    for (i = 0; i < height; i++) {
        if ((i % 2) != 0)
            continue;
        for (j = 0; j < (width / 2); j++) {
            if ((4 * j + 1) > (2 * width))
                break;
            yuv420[ynum + k * width / 2 + j] = yuv422[i * 2 * width + 4 * j + 1];
        }
        k++;
    }
    k = 0;

    /* V plane: take the V samples from odd rows only */
    for (i = 0; i < height; i++) {
        if ((i % 2) == 0)
            continue;
        for (j = 0; j < (width / 2); j++) {
            if ((4 * j + 3) > (2 * width))
                break;
            yuv420[ynum + ynum / 4 + k * width / 2 + j] = yuv422[i * 2 * width + 4 * j + 3];
        }
        k++;
    }

    return 1;
}

Note the frame-size calculation for YUYV versus YUV420:
YUYV: h*w*2
YUV420: h*w*3/2
size(YUV420) = size(YUYV)*3/4
If this size difference is ignored, the result is a garbled picture.
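
A quick usage sketch tying the sizes to the conversion. The 640x480 resolution is taken from the v4l2-ctl output above, and the wrapper function name is purely illustrative:

#include <stdlib.h>

/* Size the output buffer from the input frame size and convert one
 * 640x480 YUYV frame captured from the camera. */
unsigned char *convert_one_frame(unsigned char *yuyv, int width, int height)
{
    size_t yuyv_size   = (size_t)width * height * 2;   /* 614400 bytes for 640x480 */
    size_t yuv420_size = yuyv_size * 3 / 4;            /* 460800 bytes = w*h*3/2   */
    unsigned char *yuv420 = malloc(yuv420_size);

    if (yuv420)
        YUV422To420(yuyv, yuv420, width, height);
    return yuv420;   /* feed this to the VPU, or dump it to a file for pYUV */
}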

1.3 Playing back the captured YUV420 data

Anyone digging into YUV will sooner or later need a raw-YUV player; pYUV is one such tool and installs easily on Linux. The settings that matter are:
size: the frame size of the captured stream
Color space: YUV
Subsampling: data straight from the UVC camera is 4:2:2; after conversion it is 4:2:0
Interleaved: interleaved storage; leave it unchecked for the converted planar YUV420
One pair of concepts is easy to mix up here: scan mode versus storage layout. "Interleaved" in pYUV refers to the storage layout, which is not the same thing as fmt.pix.field = V4L2_FIELD_NONE in v4l2, which describes the scan mode.

2. VPU hardware encoding

NXP's official documentation covers the i.MX6 VPU in VPU_API_RM_L3.0.35_1.1.0.pdf. Section 3.3.1.2, Encoder Operation Flow:
To encode a bitstream, the application completes the following steps:

  1. Call vpu_Init() to initialize the VPU.
  2. Open a encoder instance by using vpu_EncOpen().
  3. Before starting a picture encoder operation, get crucial parameters for encoder operations such as required frame buffer
    size by using vpu_EncGetInitialInfo().
  4. By using the returned frame buffer requirement, allocate size of frame buffers and convey this information to the VPU
    by using vpu_EncRegisterFrameBuffer().
  5. Generate high-level header syntax by using vpu_EncGiveCommand().
  6. Start picture encoder operation picture-by-picture by using vpu_EncStartOneFrame().
  7. Wait the completion of picture encoder operation interrupt event.
  8. After encoding a frame is complete, check the results of encoder operation by using vpu_EncGetOutputInfo().
  9. If there are more frames to encode, go to Step 4. Otherwise, go to the next step.
  10. Terminate the sequence operation by closing the instance using vpu_EncClose().
  11. Call vpu_UnInit() to release the system resources.
    The encoder operation flow is shown in figure below.

Section 4.4.2.2, Encode Stream from Camera Captured Data:
The application should complete the following steps to encode streams from camera captured data:

  1. Call vpu_Init() to initialize the VPU. If there are multi-instances supported in this application, this function only needs
    to be called once.
  2. Open a encoder instance using vpu_EncOpen(). Call IOGetPhyMem() to input encop.bitstreamBuffer for the physical
    continuous bitstream buffer before opening the instance. Call IOGetVirtMem() to get the corresponding virtual
    address of the bitstream buffer, then fill the bitstream to this address in user space. If rotation is enabled and the
    rotation angle is 90° or 270°, the picture width and height must be swapped.
  3. If rotation is enabled, give commands ENABLE_ROTATION and SET_ROTATION_ANGLE. If mirror is enabled,
    give commands ENABLE_MIRRORING and SET_MIRROR_DIRECTION.
  4. Get crucial parameters for encoder operations such as required frame buffer size, and so on using
    vpu_EncGetInitialInfo().
  5. Using the frame buffer requirement returned from vpu_DecGetInitialInfo(), allocate the proper size of the frame
    buffers and notify the VPU using vpu_EncRegisterFrameBuffer(). The requested frame buffer for the source frame in
    PATH_V4L2 to encode camera captured data is as follows:
    • Allocate the minFrameBufferCount frame buffers by calling IOGetPhyMem() and register them to the VPU for
    encoder using vpu_EncRegisterFrameBuffer().
    • Another frame buffer is needed for the source frame buffer. Call v4l_capture_setup() to open the v4l device for
    camera and request v4l buffers. In this example, three v4l buffers are allocated. Call v4l_start_capturing() to
    start camera capture. Pass the dequeued v4l buffer address by calling v4l_get_capture_data() as encoder source
    frame in each picture encoder, then no need to memory transfer for performance improvement.
  6. Generate the high-level header syntaxes using vpu_EncGiveCommand().
  7. Start picture encoder operation picture-by-picture using vpu_EncStartOneFrame(). Pass dequeued v4l buffer address
    by calling v4l_get_capture_data() as the encoder source frame before each picture encoder is started.
  8. Wait for the completion of picture decoder operation interrupt event calling vpu_WaitforInt(). Use vpu_IsBusy() to
    check if the VPU is busy. If the VPU is not busy, go to the next step; otherwise, wait again.
  9. After encoding a frame is complete, check the results of encoder operation using vpu_EncGetOutputInfo(). After the
    output information is received, call v4l_put_capture_data() to the VIDIOC_QBUF v4l buffer for the next capture
    usage.
  10. If there are more frames to encode, go to Step 7; otherwise, go to the next step.
  11. Terminate the sequence operation by closing the instance using vpu_DecClose(). Make sure
    vpu_DecGetOutputInfo() is called for each corresponding vpu_DecStartOneFrame() call before closing the instance
    although the last output information may be not useful.
  12. Free all allocated memory and v4l resource using IOFreePhyMem() and IOFreeVirtMem(). Call
    v4l_stop_capturing() to stop capture.
  13. Call vpu_UnInit() to release the system resources. If there are multi-instances supported in this application, this
    function only needs to be called once.

The document walks through both VPU encoding and decoding in detail, and the material shipped with the dev board includes the source code and help files for mxc_vpu_test.out. Mapped onto code, the encode flow from section 3.3.1.2 looks roughly like the skeleton below.
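
This is only an outline against the vpu_lib.h / vpu_io.h headers of the i.MX VPU library: the function names are taken from the quoted document, struct fields and the physical-memory allocation via IOGetPhyMem() are abbreviated, and exact spellings and signatures should be checked against the headers and the mxc_vpu_test sources.

#include <vpu_lib.h>   /* i.MX VPU library headers shipped with the BSP */
#include <vpu_io.h>

/* Skeleton of the encode flow from section 3.3.1.2; struct fields and
 * all error handling are omitted here. */
int vpu_encode_h264_skeleton(void)
{
    EncHandle handle;
    EncOpenParam encop = {0};
    EncInitialInfo info = {0};
    EncParam param = {0};
    EncOutputInfo out = {0};

    vpu_Init(NULL);                               /* step 1: initialize the VPU      */

    encop.bitstreamFormat = STD_AVC;              /* encode to H.264                 */
    /* ...picWidth, picHeight, frameRateInfo, bitstreamBuffer (IOGetPhyMem)...       */
    vpu_EncOpen(&handle, &encop);                 /* step 2: open encoder instance   */

    vpu_EncGetInitialInfo(handle, &info);         /* step 3: frame-buffer needs      */

    /* step 4: allocate info.minFrameBufferCount buffers with IOGetPhyMem()
     * and hand them over with vpu_EncRegisterFrameBuffer()                          */
    /* step 5: generate SPS/PPS headers with vpu_EncGiveCommand()                    */

    for (;;) {                                    /* steps 6-9: one pass per picture */
        /* copy the next YUV420 frame into the registered source buffer, or stop    */
        vpu_EncStartOneFrame(handle, &param);     /* step 6: start one picture       */
        while (vpu_IsBusy())
            vpu_WaitforInt(200);                  /* step 7: wait for completion     */
        vpu_EncGetOutputInfo(handle, &out);       /* step 8: fetch the H.264 chunk   */
        /* out.bitstreamBuffer / out.bitstreamSize -> append to file or stream       */
        break;                                    /* sketch only: stop after a frame */
    }

    vpu_EncClose(handle);                         /* step 10: close the instance     */
    vpu_UnInit();                                 /* step 11: release resources      */
    return 0;
}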

2.1 Hardware-encoding H.264 with mxc_vpu_test.out

The dev board ships with the VPU test tool mxc_vpu_test.out along with its source. First use it to encode the captured YUV420 file to H.264 and confirm the data is correct.

root@imx6qsabresd:/# cd /unit_tests/
root@imx6qsabresd:/unit_tests# ./mxc_vpu_test.out -E "-i out.yuv -w 624 -h 416 -f 2 -o file.264 -t 0"
root@imx6qsabresd:/home# ./mxc_vpu_test.out 
[INFO]	
Usage: ./mxc_vpu_test.out -D "<decode options>" -E "<encode options>" -L "<loopback options>" -C <config file> -T "<transcode options>" -H display this help 
 
decode options 
   -i <input file> Read input from file 
 	If no input file is specified, default is network 
   -o <output file> Write output to file 
 	If no output is specified, default is LCD 
   -x <output method> output mode V4l2(0) or IPU lib(1) 
         0 - V4L2 of FG device, 1 - IPU lib path 
         2 - G2D (available for Android only) 
         Other value means V4L2 with other video node
         16 - /dev/video16, 17 - /dev/video17, and so on 
   -f <format> 0 - MPEG4, 1 - H.263, 2 - H.264, 3 - VC1, 
 	4 - MPEG2, 5 - DIV3, 6 - RV, 7 - MJPG, 
         8 - AVS, 9 - VP8
 	If no format specified, default is 0 (MPEG4) 
   -l <mp4Class / h264 type> 
         When 'f' flag is 0 (MPEG4), it is mp4 class type. 
         0 - MPEG4, 1 - DIVX 5.0 or higher, 2 - XVID, 5 - DIVX4.0 
         When 'f' flag is 2 (H.264), it is h264 type. 
         0 - normal H.264(AVC), 1 - MVC 
   -p <port number> UDP port number to bind 
 	If no port number is secified, 5555 is used 
   -c <count> Number of frames to decode 
   -d <deblocking> Enable deblock - 1. enabled 
 	default deblock is disabled (0). 
   -e <dering> Enable dering - 1. enabled 
 	default dering is disabled (0). 
   -r <rotation angle> 0, 90, 180, 270 
 	default rotation is disabled (0) 
   -m <mirror direction> 0, 1, 2, 3 
 	default no mirroring (0) 
   -u <ipu/gpu rotation> Using IPU/GPU rotation for display - 1. IPU/GPU rotation 
         default is VPU rotation(0).
         This flag is effective when 'r' flag is specified.
   -v <vdi motion> set IPU VDI motion algorithm l, m, h.
 	default is m-medium. 
   -w <width> display picture width 
 	default is source picture width. 
   -h <height> display picture height 
 	default is source picture height 
   -j <left offset> display picture left offset 
 	default is 0. 
   -k <top offset> display picture top offset 
 	default is 0 
   -a <frame rate> display framerate 
 	default is 30 
   -t <chromaInterleave> CbCr interleaved 
         default is interleave(1). 
   -s <prescan/bs_mode> Enable prescan in decoding on i.mx5x - 1. enabled 
         default is disabled. Bitstream mode in decoding on i.mx6  
         0. Normal mode, 1. Rollback mode 
         default is enabled. 
   -y <maptype> Map type for GDI interface 
         0 - Linear frame map, 1 - frame MB map, 2 - field MB map 
         default is 0. 
 
encode options 
   -i <input file> Read input from file (yuv) 
 	If no input file specified, default is camera 
   -x <input method> input mode V4L2 with video node 
         0 - /dev/video0, 1 - /dev/video1, and so on 
   -o <output file> Write output to file 
 	This option will be ignored if 'n' is specified 
 	If no output is specified, def files are created 
   -n <ip address> Send output to this IP address 
   -p <port number> UDP port number at server 
 	If no port number is secified, 5555 is used 
   -f <format> 0 - MPEG4, 1 - H.263, 2 - H.264, 7 - MJPG 
 	If no format specified, default is 0 (MPEG4) 
   -l <h264 type> 0 - normal H.264(AVC), 1 - MVC
   -c <count> Number of frames to encode 
   -r <rotation angle> 0, 90, 180, 270 
 	default rotation is disabled (0) 
   -m <mirror direction> 0, 1, 2, 3 
 	default no mirroring (0) 
   -w <width> capture image width 
 	default is 176. 
   -h <height>capture image height 
 	default is 144 
   -b <bitrate in kbps> 
 	default is auto (0) 
   -g <gop size> 
 	default is 0 
   -t <chromaInterleave> CbCr interleaved 
         default is interleave(1). 
   -q <quantization parameter> 
 	default is 20 
   -a <frame rate> capture/encode framerate 
 	default is 30 
 
loopback options 
   -x <input method> input mode V4L2 with video node 
         0 - /dev/video0, 1 - /dev/video1, and so on 
   -f <format> 0 - MPEG4, 1 - H.263, 2 - H.264, 7 - MJPG 
 	If no format specified, default is 0 (MPEG4) 
   -w <width> capture image width 
 	default is 176. 
   -h <height>capture image height 
 	default is 144 
   -t <chromaInterleave> CbCr interleaved 
         default is interleave(1). 
   -a <frame rate> capture/encode/display framerate 
 	default is 30 
 
transcode options, encoder set to h264 720p now 
   -i <input file> Read input from file 
         If no input file is specified, default is network 
   -o <output file> Write output to file 
         If no output is specified, default is LCD 
   -x <output method> V4l2(0) or IPU lib(1) 
   -f <format> 0 - MPEG4, 1 - H.263, 2 - H.264, 3 - VC1, 
         4 - MPEG2, 5 - DIV3, 6 - RV, 7 - MJPG, 
         8 - AVS, 9 - VP8
         If no format specified, default is 0 (MPEG4) 
   -l <mp4Class / h264 type> 
         When 'f' flag is 0 (MPEG4), it is mp4 class type. 
         0 - MPEG4, 1 - DIVX 5.0 or higher, 2 - XVID, 5 - DIVX4.0 
         When 'f' flag is 2 (H.264), it is h264 type. 
         0 - normal H.264(AVC), 1 - MVC 
   -p <port number> UDP port number to bind 
         If no port number is secified, 5555 is used 
   -c <count> Number of frames to decode 
   -d <deblocking> Enable deblock - 1. enabled 
         default deblock is disabled (0). 
   -e <dering> Enable dering - 1. enabled 
         default dering is disabled (0). 
   -r <rotation angle> 0, 90, 180, 270 
         default rotation is disabled (0) 
   -m <mirror direction> 0, 1, 2, 3 
         default no mirroring (0) 
   -u <ipu rotation> Using IPU rotation for display - 1. IPU rotation 
         default is VPU rotation(0).
         This flag is effective when 'r' flag is specified.
   -v <vdi motion> set IPU VDI motion algorithm l, m, h.
         default is m-medium. 
   -w <width> display picture width 
         default is source picture width. 
   -h <height> display picture height 
         default is source picture height 
   -j <left offset> display picture left offset 
         default is 0. 
   -k <top offset> display picture top offset 
         default is 0 
   -a <frame rate> display framerate 
         default is 30 
   -t <chromaInterleave> CbCr interleaved 
         default is interleave(1). 
   -s <prescan/bs_mode> Enable prescan in decoding on i.mx5x - 1. enabled 
         default is disabled. Bitstream mode in decoding on i.mx6  
         0. Normal mode, 1. Rollback mode 
         default is enabled. 
   -y <maptype> Map type for GDI interface 
         0 - Linear frame map, 1 - frame MB map, 2 - field MB map 
   -q <quantization parameter> 
 	default is 20 
 
config file - Use config file for specifying options 

./mxc_vpu_test.out -E "-i out.yuv -w 624 -h 416 -f 2 -o file.264 -t 0"
-E encode operation
-i input YUV file
-w, -h video resolution
-f output format, 2 = H.264
-o output file name
-t chroma interleave (CbCr interleaved or planar)
The run produces file.264, which can be streamed with live555 right away.

3. Streaming with live555MediaServer

As in the previous post, the ported live555MediaServer is used directly for the streaming test.

root@imx6qsabresd:/home# ./live555MediaServer 
LIVE555 Media Server
	version 0.99 (LIVE555 Streaming Media library version 2020.01.28).
Play streams from this server using the URL
	rtsp://192.168.2.11/<filename>
where <filename> is a file present in the current directory.
Each file's type is inferred from its name suffix:
	".264" => a H.264 Video Elementary Stream file
	".265" => a H.265 Video Elementary Stream file
	".aac" => an AAC Audio (ADTS format) file
	".ac3" => an AC-3 Audio file
	".amr" => an AMR Audio file
	".dv" => a DV Video file
	".m4e" => a MPEG-4 Video Elementary Stream file
	".mkv" => a Matroska audio+video+(optional)subtitles file
	".mp3" => a MPEG-1 or 2 Audio file
	".mpg" => a MPEG-1 or 2 Program Stream (audio+video) file
	".ogg" or ".ogv" or ".opus" => an Ogg audio and/or video file
	".ts" => a MPEG Transport Stream file
		(a ".tsx" index file - if present - provides server 'trick play' support)
	".vob" => a VOB (MPEG-2 video with AC-3 audio) file
	".wav" => a WAV Audio file
	".webm" => a WebM audio(Vorbis)+video(VP8) file
See http://www.live555.com/mediaServer/ for additional documentation.
(We use port 80 for optional RTSP-over-HTTP tunneling, or for HTTP live streaming (for indexed Transport Stream files only).)

Once the server is running, point any RTSP player such as VLC at rtsp://192.168.2.11/file.264 to play the stream.

Summary

With the whole data path working end to end, the next step is to integrate everything into one program.
The source code is being cleaned up; stay tuned.
