Android MediaCodec stuff
Last update: 2016-06-08

Original article: http://www.bigflake.com/mediacodec/

This page is about the Android MediaCodec class, which can be used to encode and decode audio and video data. It includes a collection of sample code and answers to frequently-asked questions.

As of Marshmallow (API 23), the official documentation is quite detailed and very useful. (This is a huge step up from what was there when this page was created, and a significant advancement over the semi-reasonable docs in API 21.) The information on that page should be your primary source of information. The code here is expected to work with API 18+, for broad compatibility. If you're specifically targeting Lollipop or Marshmallow, you have options available that aren't shown here.

Overview

The MediaCodec class first became available in Android 4.1 (API 16). It was added to allow direct access to the media codecs on the device. As such, it provides a rather "raw" interface. While the MediaCodec class exists in both Java and C++ sources, only the former is public.

In Android 4.3 (API 18), MediaCodec was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or OpenGL ES rendering. Android 4.3 was also the first release with MediaCodec tests in CTS, which helps ensure consistent behavior between devices.
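
For illustration, a minimal sketch of the Surface input path might look like this (the resolution and rate values are arbitrary placeholders):

  MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
  format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
          MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
  format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
  format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
  format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

  MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
  encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
  // The input Surface must be requested after configure() and before start().
  Surface inputSurface = encoder.createInputSurface();
  encoder.start();
  // Render camera preview or GLES output into inputSurface, then drain
  // encoded buffers from the encoder as usual.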

Android 4.3 also introduced MediaMuxer, which allows the output of the AVC codec (a raw H.264 elementary stream) to be converted to .MP4 format, with or without an associated audio stream.
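
A rough sketch of the muxer side, assuming encoderOutputFormat, encodedData, and bufferInfo come from draining the encoder (names and output path are illustrative):

  MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4",
          MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
  // Use the MediaFormat reported via INFO_OUTPUT_FORMAT_CHANGED, which
  // carries the csd-0/csd-1 buffers the muxer needs.
  int track = muxer.addTrack(encoderOutputFormat);
  muxer.start();
  // For each encoded buffer drained from the codec:
  muxer.writeSampleData(track, encodedData, bufferInfo);
  // After the end-of-stream buffer:
  muxer.stop();
  muxer.release();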

Android 5.0 (API 21) introduced "asynchronous mode", which allows an app to provide a callback method that executes as buffers become available. The bits of code linked from this page don't take advantage of this, because they target API 18+.
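
For reference, the asynchronous pattern looks roughly like this (API 21+ only; sampleSize and presentationTimeUs stand in for your own bookkeeping):

  codec.setCallback(new MediaCodec.Callback() {
      @Override
      public void onInputBufferAvailable(MediaCodec mc, int index) {
          ByteBuffer input = mc.getInputBuffer(index);
          // Fill 'input' with one access unit, then:
          mc.queueInputBuffer(index, 0, sampleSize, presentationTimeUs, 0);
      }
      @Override
      public void onOutputBufferAvailable(MediaCodec mc, int index,
              MediaCodec.BufferInfo info) {
          ByteBuffer output = mc.getOutputBuffer(index);
          // Consume 'output', then:
          mc.releaseOutputBuffer(index, false);
      }
      @Override
      public void onOutputFormatChanged(MediaCodec mc, MediaFormat format) { }
      @Override
      public void onError(MediaCodec mc, MediaCodec.CodecException e) { }
  });
  codec.configure(format, null, null, 0);  // setCallback() goes before configure()
  codec.start();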

Basic Usage

All uses of the "synchronous" MediaCodec API follow a basic pattern:

  • create and configure the MediaCodec object
  • loop until done:
    • if an input buffer is ready:
      • read a chunk of input, copy it into the buffer
    • if an output buffer is ready:
      • copy the output from the buffer
  • release MediaCodec object
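
In code, a minimal version of that loop might look like the following (error handling omitted; TIMEOUT_US, sampleSize, ptsUs, and sawEOS stand in for your own bookkeeping):

  MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
  codec.configure(format, surface, null, 0);  // 'format' typically from MediaExtractor
  codec.start();
  ByteBuffer[] inputBuffers = codec.getInputBuffers();
  MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
  boolean inputDone = false;
  boolean outputDone = false;
  while (!outputDone) {
      if (!inputDone) {
          int inIndex = codec.dequeueInputBuffer(TIMEOUT_US);
          if (inIndex >= 0) {
              // Copy one access unit into inputBuffers[inIndex], then:
              codec.queueInputBuffer(inIndex, 0, sampleSize, ptsUs,
                      sawEOS ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0);
              inputDone = sawEOS;
          }
      }
      int outIndex = codec.dequeueOutputBuffer(info, TIMEOUT_US);
      if (outIndex >= 0) {
          // Consume the output (or let the Surface render it), then:
          codec.releaseOutputBuffer(outIndex, false);
          if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
              outputDone = true;
          }
      }
  }
  codec.stop();
  codec.release();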

A single instance of MediaCodec handles one specific type of data (e.g. MP3 audio or H.264 video), and may encode or decode. It operates on "raw" data, so any file headers (e.g. ID3 tags) must be stripped off. It does not talk to any higher-level system components; it will not play your audio through the speaker or receive a stream of video over a network. It just takes buffers of data in and spits buffers of data out. (You can use MediaExtractor to strip the wrappers off in most situations.)
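
A typical MediaExtractor sketch, assuming a local .mp4 with a video track (path is illustrative):

  MediaExtractor extractor = new MediaExtractor();
  extractor.setDataSource("/sdcard/input.mp4");
  for (int i = 0; i < extractor.getTrackCount(); i++) {
      MediaFormat trackFormat = extractor.getTrackFormat(i);
      if (trackFormat.getString(MediaFormat.KEY_MIME).startsWith("video/")) {
          extractor.selectTrack(i);
          break;
      }
  }
  ByteBuffer buf = ByteBuffer.allocate(1024 * 1024);
  int size;
  while ((size = extractor.readSampleData(buf, 0)) >= 0) {
      long ptsUs = extractor.getSampleTime();
      // Hand this access unit to the decoder ...
      extractor.advance();
  }
  extractor.release();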

Some codecs are very particular about their buffers. They may need to have a particular memory alignment, or have a certain minimum or maximum size, or it may be important to have a certain number of them available. To accommodate the wide range of possibilities, buffer allocation is performed by the codecs themselves, rather than the application. You do not hand a buffer with data to MediaCodec. You ask it for a buffer, and if one is available, you copy the data in.

This may seem contrary to "zero-copy" principles, but in most cases there will be fewer copies because the codec won't have to copy or adjust the data to meet its requirements. In some cases you may be able to use the buffer directly, e.g. read data from disk or network directly into the buffer, so a copy won't be necessary.

Input to MediaCodec must be done in "access units". When encoding H.264 video that means one frame, when decoding that means a single NAL unit. However, it behaves like a stream in the sense that you can't submit a single chunk and expect a chunk to appear shortly from the output. In practice, the codec may want to have several buffers queued up before producing any output.

It is strongly recommended that you start with sample code, rather than trying to figure it out from the documentation.

Samples

EncodeAndMuxTest.java (requires 4.3, API 18)

Generates a movie using OpenGL ES. Uses MediaCodec to encode the movie in an H.264 elementary stream, and MediaMuxer to convert the stream to a .MP4 file.

This was written as if it were a CTS test, but is not part of CTS. It should be straightforward to adapt the code to other environments.

CameraToMpegTest.java (requires 4.3, API 18)

Records video from the camera preview and encodes it as an MP4 file. Uses MediaCodec to encode the movie in an H.264 elementary stream, and MediaMuxer to convert the stream to a .MP4 file. As an added bonus, demonstrates the use of a GLES fragment shader to modify the video as it's being recorded.

This was written as if it were a CTS test, but is not part of CTS. It should be straightforward to adapt the code to other environments.

Android Breakout game recorder patch (requires 4.3, API 18)

This is a patch for Android Breakout v1.0.2 that adds game recording. While the game is playing at 60fps at full screen resolution, a 30fps 720p recording is made with the AVC codec. The file is saved in the app-private data area, e.g. /data/data/com.faddensoft.breakout/files/video.mp4.

This is essentially the same as EncodeAndMuxTest.java, but it's part of a full app rather than an isolated CTS test. One key difference is in the EGL setup, which is done in a way that allows textures to be shared between the display and video contexts. WARNING: this code has a race condition. See this bug report for details and suggested fix.

Another approach is to render each game frame to an FBO texture, and then render a full-screen quad with that twice (once for the display, once for the video). This could be faster for games that are expensive to render. Examples of both approaches can be found in Grafika's RecordFBOActivity class.

EncodeDecodeTest.java (requires 4.3, API 18)

CTS test. There are three tests that do essentially the same thing, but in different ways. Each test will:

  • Generate video frames
  • Encode frames with AVC codec
  • Decode generated stream
  • Test decoded frames to see if they match the original

The generation, encoding, decoding, and checking are near-simultaneous: frames are generated and fed to the encoder, and data from the encoder is fed to the decoder as soon as it becomes available.

The three tests are:

1.   Buffer-to-buffer. Buffers are software-generated YUV frames in ByteBuffer objects, and decoded to the same. This is the slowest (and least portable) approach, but it allows the application to examine and modify the YUV data.

2.   Buffer-to-surface. Encoding is again done from software-generated YUV data in ByteBuffers, but this time decoding is done to a Surface. Output is checked with OpenGL ES, using glReadPixels().

3.   Surface-to-surface. Frames are generated with OpenGL ES onto an input Surface, and decoded onto a Surface. This is the fastest approach, but may involve conversions between YUV and RGB.

Each test is run at three different resolutions: 720p (1280x720), QCIF (176x144), and QVGA (320x240).

The buffer-to-buffer and buffer-to-surface tests can be built with Android 4.1 (API 16). However, because the CTS tests did not exist until Android 4.3, a number of devices shipped with broken implementations.

NOTE: the setByteBuffer() usage may not be strictly correct, as it doesn't set "csd-1".

(For an example that uses the Android 5.x asynchronous API, see mstorsjo's android-decodeencodetest project.)

DecodeEditEncodeTest.java (requires 4.3, API 18)

CTS test. The test does the following:

  • Generate a series of video frames, and encode them with AVC. The encoded data stream is held in memory.
  • Decode the generated stream with MediaCodec, using an output Surface.
  • Edit the frame (swap green/blue color channels) with an OpenGL ES fragment shader.
  • Encode the frame with MediaCodec, using an input Surface.
  • Decode the edited video stream, verifying the output.

The middle decode-edit-encode pass performs decoding and encoding near-simultaneously, streaming frames directly from the decoder to the encoder. The initial generation and final verification are done as separate passes on video data held in memory.

Each test is run at three different resolutions: 720p (1280x720), QCIF (176x144), and QVGA (320x240).

No software-interpreted YUV buffers are used. Everything goes through Surface. There will be conversions between RGB and YUV at certain points; how many and where they happen depends on how the drivers are implemented.

NOTE: for this and the other CTS tests, you can see related classes by editing the class URL. For example, to see InputSurface and OutputSurface, remove "DecodeEditEncodeTest.java" from the URL, yielding this directory link.

ExtractMpegFramesTest.java (requires 4.1, API 16) 
ExtractMpegFramesTest.java (requires 4.2, API 17)

Extracts the first 10 frames of video from a .mp4 file and saves them to individual PNG files in /sdcard/. Uses MediaExtractor to extract the CSD data and feed individual access units into a MediaCodec decoder. The frames are decoded to a Surface created from SurfaceTexture, rendered (off-screen) into a pbuffer, extracted with glReadPixels(), and saved to a PNG file with Bitmap#compress().

Decoding the frame and copying it into a ByteBuffer with glReadPixels() takes about 8ms on the Nexus 5, easily fast enough to keep pace with 30fps input, but the additional steps required to save it to disk as a PNG are expensive (about half a second). The cost of saving a frame breaks down roughly like this (which you can get by modifying the test to extract full-size frames from 720p video on a Nexus 5, observing the total time required to save 10 frames, and doing successive runs with later stages removed; or by instrumenting with android.os.Trace and using systrace):

  • 0.5% video decode (hardware AVC codec)
  • 1.5% glReadPixels() into a direct ByteBuffer
  • 1.5% copying pixel data from ByteBuffer to Bitmap
  • 96.5% PNG compression and file I/O

In theory, a Surface from the API 19 ImageReader class could be passed to the MediaCodec decoder, allowing direct access to the YUV data. As of Android 4.4, the MediaCodec decoder formats are not supported by ImageReader.

One possible way to speed up transfer of RGB data would be to copy the data asynchronously through a PBO, but in the current implementation the transfer time is dwarfed by the subsequent PNG activity, so there's little value in doing so.

The two versions of the source code function identically. One was written against EGL 1.0, the other EGL 1.4. EGL 1.4 is a little easier to work with and has some features that other examples use, but if you want your app to work on Android 4.1 you can't use it.

This was written as if it were a CTS test, but is not part of CTS. It should be straightforward to adapt the code to other environments.

(NOTE: if you're having trouble with timeouts in awaitNewImage(), see this article.)

DecoderTest.java (requires 4.1, API 16)

CTS test. Tests decoding pre-recorded audio streams.

EncoderTest.java (requires 4.1, API 16)

CTS test. Tests encoding of audio streams.

MediaMuxerTest.java (requires 4.3, API 18)

CTS test. Uses MediaMuxer to clone the audio track, video track, and audio+video tracks from input to output.

screenrecord (uses non-public native APIs)

You can access the source code for screenrecord, a developer shell command introduced in Android 4.4 (API 19). It demonstrates the use of the native equivalents of MediaCodec and MediaMuxer in a pure-native command. v1.1 uses GLES and the native equivalent of SurfaceTexture.

This is FOR REFERENCE ONLY. Non-public APIs are very likely to break between releases and are not guaranteed to have consistent behavior across different devices.

Grafika (requires 4.3, API 18)

Test application exercising various graphics & media features. Examples include recording and displaying camera preview, recording OpenGL ES rendering, decoding multiple videos simultaneously, and the use of SurfaceView, GLSurfaceView, and TextureView. Highly unstable. Tons of fun. Unlike most of the samples here, Grafika is a complete application, so it's easier to try things yourself. The EGL/GLES code is also more refined, and better suited for inclusion in other projects.

FAQ

Q1. How do I play the video streams created by MediaCodec with the "video/avc" codec?

A1. The stream created is a raw H.264 elementary stream. The Totem Movie Player for Linux may work, but many other players won't touch them. You need to use the MediaMuxer class to create an MP4 file instead. See the EncodeAndMuxTest sample.

Q2. Why does my call to MediaCodec.configure() fail with an IllegalStateException when I try to create an encoder?

A2. This is usually because you haven't specified all of the mandatory keys required by the encoder. See this stackoverflow item for an example.
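
A sketch of an AVC encoder configuration with the usual mandatory keys (values are placeholders; the color format must be one your device actually supports, see Q5):

  MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
  format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
          MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
  format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
  format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
  format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
  encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);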

Q3. My video decoder is configured but won't accept data. What's wrong?

A3. A common mistake is neglecting to set the Codec-Specific Data, mentioned briefly in the documentation, through the keys "csd-0" and "csd-1". This is a bunch of raw data with things like Sequence Parameter Set and Picture Parameter Set; all you usually need to know is that the MediaCodec encoder generates them and the MediaCodec decoder wants them.

If you are feeding the output of the encoder to the decoder, you will note that the first packet you get from the encoder has the BUFFER_FLAG_CODEC_CONFIG flag set. You need to make sure you propagate this flag to the decoder, so that the first buffer the decoder receives does the setup. Alternatively, you can set the CSD data in the MediaFormat, and pass this into the decoder via configure().
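
The MediaFormat route looks roughly like this, where sps and pps are illustrative ByteBuffers holding the raw parameter-set NAL units:

  MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
  format.setByteBuffer("csd-0", sps);  // Sequence Parameter Set
  format.setByteBuffer("csd-1", pps);  // Picture Parameter Set
  decoder.configure(format, outputSurface, null, 0);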

You can see examples of both approaches in the EncodeDecodeTest sample.

If you're not sure how to set this up, you should probably be using MediaExtractor, which will handle it all for you.

Q4. Can I stream data into the decoder?

A4. Yes and no. The decoder takes a stream of "access units", which may not be a stream of bytes. For the video decoder, this means you need to preserve the "packet boundaries" established by the encoder (e.g. NAL units for H.264 video). For example, see how the VideoChunks class in the DecodeEditEncodeTest sample operates. You can't just read arbitrary chunks of the file and pass them in.

Q5. I'm encoding the output of the camera through a YUV preview buffer. Why do the colors look wrong?

A5. The color formats for the camera output and the MediaCodec encoder input are different. Camera supports YV12 (planar YUV 4:2:0) and NV21 (semi-planar YUV 4:2:0).

The MediaCodec encoders support one or more of:

  • #19 COLOR_FormatYUV420Planar (I420)
  • #20 COLOR_FormatYUV420PackedPlanar (also I420)
  • #21 COLOR_FormatYUV420SemiPlanar (NV12)
  • #39 COLOR_FormatYUV420PackedSemiPlanar (also NV12)
  • #0x7f000100 COLOR_TI_FormatYUV420PackedSemiPlanar (also also NV12)

I420 has the same general data layout as YV12, but the Cr and Cb planes are reversed. Same with NV12 vs. NV21. So if you try to hand YV12 buffers from the camera to an encoder expecting something else, you'll see some odd color effects, like in these images.

As of Android 4.4 (API 19), there is still no common input format. Nvidia Tegra 3 devices like the Nexus 7 (2012), and Samsung Exynos devices like the Nexus 10, want COLOR_FormatYUV420Planar. Qualcomm Adreno devices like the Nexus 4, Nexus 5, and Nexus 7 (2013) want COLOR_FormatYUV420SemiPlanar. TI OMAP devices like the Galaxy Nexus want COLOR_TI_FormatYUV420PackedSemiPlanar. (This is based on the format that is returned first when the AVC codec is queried.)
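
You can see what your own device advertises with something like this (TAG is a placeholder):

  for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
      MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
      if (!codecInfo.isEncoder()) continue;
      for (String type : codecInfo.getSupportedTypes()) {
          if (type.equals("video/avc")) {
              MediaCodecInfo.CodecCapabilities caps =
                      codecInfo.getCapabilitiesForType(type);
              for (int colorFormat : caps.colorFormats) {
                  Log.d(TAG, codecInfo.getName() + ": color format " + colorFormat);
              }
          }
      }
  }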

A more portable, and more efficient, approach is to use the API 18 Surface input API, demonstrated in the CameraToMpegTest sample. The down side of this is that you have to operate in RGB rather than YUV, which is a problem for image processing software. If you can implement the image manipulation in a fragment shader, perhaps by converting between RGB and YUV before and after your computations, you can take advantage of code execution on the GPU.

Note that the MediaCodec decoders may produce data in ByteBuffers using one of the above formats or in a proprietary format. For example, devices based on Qualcomm SoCs commonly use OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m (#2141391876 / 0x7FA30C04).

Surface input uses COLOR_FormatSurface, also known as OMX_COLOR_FormatAndroidOpaque (#2130708361 / 0x7F000789). For the full list, see OMX_COLOR_FORMATTYPE in OMX_IVCommon.h.

As of API 21 you can work with an Image object instead. See the MediaCodec getInputImage() and getOutputImage() calls.
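
A sketch of the output side with getOutputImage() (API 21+; outIndex comes from dequeueOutputBuffer()):

  Image image = codec.getOutputImage(outIndex);
  if (image != null) {
      Image.Plane[] planes = image.getPlanes();  // Y, U, V
      ByteBuffer yPlane = planes[0].getBuffer();
      int rowStride = planes[0].getRowStride();
      int pixelStride = planes[1].getPixelStride();  // 1 = planar, 2 = semi-planar
      // ... read the pixel data ...
      image.close();
  }
  codec.releaseOutputBuffer(outIndex, false);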

Q6. What's this EGL_RECORDABLE_ANDROID flag?

A6. That tells EGL that the surface it creates must be compatible with the video codecs. Without this flag, EGL might use a buffer format that MediaCodec can't understand.
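
When choosing an EGLConfig for the encoder's input Surface, the flag goes into the attribute list; the constant comes from the EGL_ANDROID_recordable extension, so pre-API-26 code defines it by value:

  final int EGL_RECORDABLE_ANDROID = 0x3142;  // from EGL_ANDROID_recordable
  int[] attribList = {
          EGL14.EGL_RED_SIZE, 8,
          EGL14.EGL_GREEN_SIZE, 8,
          EGL14.EGL_BLUE_SIZE, 8,
          EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
          EGL_RECORDABLE_ANDROID, 1,  // surface must be usable by the video codec
          EGL14.EGL_NONE
  };
  EGLConfig[] configs = new EGLConfig[1];
  int[] numConfigs = new int[1];
  EGL14.eglChooseConfig(eglDisplay, attribList, 0, configs, 0, configs.length,
          numConfigs, 0);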

Q7. Can I use the ImageReader class with MediaCodec?

A7. Maybe. The ImageReader class, added in Android 4.4 (API 19), provides a handy way to access data in a YUV surface. Unfortunately, as of API 19 it only works with buffers from Camera (though that may have been corrected in API 21, when MediaCodec added getOutputImage()). Also, there was no corresponding ImageWriter class for creating content until API 23.

Q8. Do I have to set a presentation time stamp when encoding video?

A8. Yes. It appears that some devices will drop frames or encode them at low quality if the presentation time stamp isn't set to a reasonable value (see this stackoverflow item).

Remember that the time required by MediaCodec is in microseconds. Most timestamps passed around in Java code are in milliseconds or nanoseconds.
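
For example, for synthetic frames (frameIndex and frameRate are your own bookkeeping):

  long ptsUs = frameIndex * 1000000L / frameRate;  // microseconds, not ms or ns
  codec.queueInputBuffer(inIndex, 0, size, ptsUs, 0);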

Q9. Most of the examples require API 18. I'm coding for API 16. Is there something I should know?

A9. Yes. Some key features aren't available until API 18, and some basic features are more difficult to use in API 16.

If you're decoding video, things don't change much. As you can see from the two implementations of ExtractMpegFramesTest, the newer version of EGL isn't available, but for many applications that won't matter.

If you're encoding video, things are much worse. Some key points:

1.   The MediaCodec encoders don't accept input from a Surface, so you have to provide the data as raw YUV frames.

2.   The layout of the YUV frames varies from device to device, and in some cases you have to check for specific vendors by name to handle certain quirks.

3.   Some devices may not advertise support for any usable YUV formats (i.e. they're internal-use only).

4.   The MediaMuxer class doesn't exist, so there's no way to convert the H.264 stream to something that MediaPlayer (or many desktop players) will accept. You have to use a 3rd-party library (perhaps mp4parser).

5.   When the MediaMuxer class was introduced in API 18, the behavior of MediaCodec encoders was changed to emit INFO_OUTPUT_FORMAT_CHANGED at the start, so that you have a convenient MediaFormat to feed to the muxer. On older versions of Android, this does not happen.

This stackoverflow item has additional links and commentary.

The CTS tests for MediaCodec were introduced with API 18 (Android 4.3), which in practice means that's the first release where the basic features are likely to work consistently across devices. In particular, pre-4.3 devices have been known to drop the last frame or scramble PTS values when decoding.

Q10. Can I use MediaCodec in the AOSP emulator?

A10. Maybe. The emulator provides a software AVC codec that lacks certain features, notably input from a Surface (although it appears that this may now be fixed in Android 5.0 "Lollipop"). Developing on a physical device will likely be less frustrating.

Q11. Why is the output messed up (all zeroes, too short, etc)?

A11. The most common mistake is failing to adjust the ByteBuffer position and limit values. As of API 19, MediaCodec does not do this for you.

You need to do something like:

  int bufIndex = codec.dequeueOutputBuffer(info, TIMEOUT);
  ByteBuffer outputData = outputBuffers[bufIndex];
  if (info.size != 0) {
      outputData.position(info.offset);
      outputData.limit(info.offset + info.size);
  }

On the input side, you want to call clear() on the buffer before copying data into it.
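
A matching input-side fragment, continuing the style above (frameData is an illustrative byte array holding one access unit):

  int inIndex = codec.dequeueInputBuffer(TIMEOUT);
  if (inIndex >= 0) {
      ByteBuffer inputData = inputBuffers[inIndex];
      inputData.clear();  // reset position/limit left over from the last use
      inputData.put(frameData);
      codec.queueInputBuffer(inIndex, 0, frameData.length, ptsUs, 0);
  }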

Q12. Why am I seeing storeMetaDataInBuffers failures in the log?

A12. They look like this (example from a Nexus 5):

E OMXNodeInstance: OMX_SetParameter() failed for StoreMetaDataInBuffers: 0x8000101a

E ACodec : [OMX.qcom.video.encoder.avc] storeMetaDataInBuffers (output) failed w/err -2147483648

You can ignore them, they're harmless.

Further Assistance

Please post all questions to stackoverflow with the android tag (and, for MediaCodec issues, the mediacodec tag as well). Comments or feature requests for the framework or CTS tests should be made on the AOSP bug tracker.
