Multimedia Evaluation of the i.MX8M Series SoMs

This article evaluates the multimedia performance of the FETMX8MP-C SoM in three parts: hardware encoding/decoding, multi-display output, and 4K HD camera input. The results support the following conclusion: the FETMX8MP-C processes video more efficiently, delivers a better multimedia experience, and can satisfy demanding multi-display applications, while the 4K HD camera provides excellent video capture and opens up more possibilities for product design.

The detailed results follow.

I. Hardware Encoding and Decoding

To get a clearer picture of the hardware codec performance of the FETMX8MP-C SoM, we compared it against two other Forlinx Embedded SoMs built on NXP i.MX8M-series processors, the FETMX8MM-C and the FETMX8MQ-C. Each board hardware-decoded and played the same H.264 video file (with audio), and the resulting CPU usage was compared:
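For reference, the measurement can be reproduced with a pipeline along the following lines. The file path is the same illustrative one used later in this article, and on these BSPs playbin normally selects the VPU-backed hardware decoder automatically (element availability depends on the image):

# Hardware-accelerated playback; playbin picks the VPU-backed decoder on these BSPs.
root@OK8MP:~# gst-launch-1.0 playbin uri=file:///media/forlinx/video/1080p_60fps_h264.mp4 video-sink=waylandsink &
# Watch the CPU load of the pipeline while the video plays.
root@OK8MP:~# top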

(Screenshots: CPU usage during hardware-decoded playback on the FETMX8MQ-C, FETMX8MM-C, and FETMX8MP-C SoMs)

As the screenshots show, among the three i.MX8M-series SoMs offered by Forlinx Embedded, hardware-decoding and playing the test H.264 video produced a CPU usage of 23.5% on the FETMX8MQ-C, 18.2% on the FETMX8MM-C, and only 11.6% on the FETMX8MP-C.

So although all three use hardware decoding, the FETMX8MP-C SoM delivers higher hardware-decoding performance and lower CPU usage than the other processors in the same family.

II. Multi-Display Output

Dual-display output was already available on Forlinx Embedded's i.MX6Q/i.MX6DL SoMs; the i.MX8MP SoM takes this to three displays. Before starting the tests, a note for users who only need a single display: the FETMX8MP-C SoM supports three display interfaces, LVDS, HDMI, and MIPI-DSI, and boots into triple-display mode by default. In this state only the MIPI screen shows the Qt test-program list after boot, while the LVDS and HDMI screens show the Forlinx Embedded logo. Users who need only one display therefore have to configure the screens at boot time and disable the unused ones; see section 2.4, Screen Switching, of the Forlinx Embedded i.MX8MP user manual for the exact steps.
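The procedure in section 2.4 of the manual is the documented way to do this. Purely as a generic Weston-level alternative, unused outputs can usually also be switched off in weston.ini; the output names below are assumptions and differ between BSPs, so check the Weston log for the real connector names:

# /etc/xdg/weston/weston.ini -- generic Weston way to disable unused outputs.
# Output names (LVDS-1, HDMI-A-1) are assumptions; verify them in the Weston log.
[output]
name=LVDS-1
mode=off

[output]
name=HDMI-A-1
mode=off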

The multi-display tests follow.

1. Independent content on three displays

To test independent triple-display output on the i.MX8MP SoM, a video was played on the main screen (the MIPI display) and then dragged across the three screens with the mouse. The result is shown in the demo video:

As the demo shows, in triple-display mode the three screens are ordered, from left to right, MIPI-DSI, LVDS, HDMI. Because the three screens have different resolutions, the video changes size slightly while being dragged; three screens of identical resolution and size would give a better result.
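To check the resolution of each attached output, the Wayland globals can be listed with weston-info (wayland-info on newer images); each screen appears as a wl_output entry together with its current mode. Availability of the tool depends on the image:

# List connected outputs and their current modes (resolution / refresh rate).
root@OK8MP:~# weston-info | grep -E "wl_output|width:|model:"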

This is only a simple demonstration of independent triple-display output; engineers can build on it according to their actual requirements.

2. Simultaneous video playback on three displays

The FETMX8MP-C SoM can currently play the same video, or different videos, on all three screens at once from the command line. The following command plays the same video on all three screens:

root@OK8MP:~# gst-launch-1.0 playbin uri=file:///media/forlinx/video/1080p_60fps_h264.mp4 video-sink="waylandsink window-x=0 window-y=10" & \
gst-launch-1.0 playbin uri=file:///media/forlinx/video/1080p_60fps_h264.mp4 video-sink="waylandsink window-x=1152 window-y=120" & \
gst-launch-1.0 playbin uri=file:///media/forlinx/video/1080p_60fps_h264.mp4 video-sink="waylandsink window-x=2754 window-y=250"

The three file URIs can be changed to point to three different video files to play different videos at the same time.

The window-x/window-y values are the actual on-screen coordinates of each video window; they are used here to move each video onto a different screen (the first on the MIPI display, the second on LVDS, the third on HDMI, each roughly centered).
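Building on the note above about changing the file paths, playing three different clips only requires pointing the URIs at three different files; the file names below are placeholders:

root@OK8MP:~# gst-launch-1.0 playbin uri=file:///media/forlinx/video/clip_a.mp4 video-sink="waylandsink window-x=0 window-y=10" & \
gst-launch-1.0 playbin uri=file:///media/forlinx/video/clip_b.mp4 video-sink="waylandsink window-x=1152 window-y=120" & \
gst-launch-1.0 playbin uri=file:///media/forlinx/video/clip_c.mp4 video-sink="waylandsink window-x=2754 window-y=250"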

The figure below shows the same video playing on all three screens simultaneously; adjusting the coordinates in the command centers the video on each screen.
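Because the three pipelines are started in the background of the same shell, they can be stopped afterwards by killing the shell jobs (or with killall gst-launch-1.0, if killall is present on the image):

# Stop the three background playbin pipelines started above.
root@OK8MP:~# kill %1 %2 %3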

III. 4K HD Camera Input

Besides the OV5645 MIPI camera and USB UVC cameras, the FETMX8MP-C SoM currently also supports a 4K HD camera, the daA3840-30mc. The daA3840-30mc is the 4K camera officially recommended by NXP for building a powerful vision system around the i.MX 8M Plus processor, and it can serve as a solution for vision-based machine-learning applications. The test procedure is as follows.

First, confirm the Basler camera's device nodes:

root@OK8MP:~# v4l2-ctl --list-devices
 ():
        /dev/v4l-subdev0
        /dev/v4l-subdev3
        /dev/v4l-subdev4
 ():
        /dev/v4l-subdev1
 (csi0):
        /dev/v4l-subdev2
VIV (platform:viv0):
        /dev/video0
VIV (platform:viv1):
        /dev/video1

Check the formats and resolutions supported by the camera:

root@OK8MP:~# v4l2-ctl --list-formats-ext -d /dev/video1
ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [1]: 'NV12' (Y/CbCr 4:2:0)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [2]: 'NV16' (Y/CbCr 4:2:2)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)
        [3]: 'BA12' (12-bit Bayer GRGR/BGBG)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.033s (30.000 fps)

Camera preview:

root@OK8MP:~# gst-launch-1.0 v4l2src device=/dev/video1 ! waylandsink

On a 4K display the camera output also reaches 4K and is extremely sharp. To test further camera features (such as still capture and video recording), see the Forlinx Embedded i.MX8MP user manual.
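As a quick, hedged example of still capture (the exact element set depends on the image; jpegenc is the software JPEG encoder from gst-plugins-good, and the output path is only illustrative), a single 4K frame can be grabbed and saved as a JPEG like this:

# Capture one 3840x2160 YUYV frame from the camera and encode it to JPEG in
# software; -e sends EOS so the file is finalized cleanly.
root@OK8MP:~# gst-launch-1.0 -e v4l2src device=/dev/video1 num-buffers=1 ! \
    "video/x-raw,format=YUY2,width=3840,height=2160" ! \
    jpegenc ! filesink location=/home/root/snapshot.jpg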

In summary, the FETMX8MP-C SoM shows lower CPU usage for hardware video decoding than its i.MX8M-family siblings, drives up to three displays simultaneously, and accepts 4K camera input, giving developers more headroom for complex multimedia product designs.
