A Translation of the Official MediaCodec Documentation


MediaCodec


public final class MediaCodec extends Object

java.lang.Object ↳ android.media.MediaCodec

 

MediaCodec class can be used to access low-level media codecs, i.e. encoder/decoder components. It is part of the Android low-level multimedia support infrastructure (normally used together with MediaExtractor, MediaSync, MediaMuxer, MediaCrypto, MediaDrm, Image, Surface, and AudioTrack.)


 In broad terms, a codec processes input data to generate output data. It processes data asynchronously and uses a set of input and output buffers. At a simplistic level, you request (or receive) an empty input buffer, fill it up with data and send it to the codec for processing. The codec uses up the data and transforms it into one of its empty output buffers. Finally, you request (or receive) a filled output buffer, consume its contents and release it back to the codec. 


 

Data Types


Codecs operate on three kinds of data: compressed data, raw audio data and raw video data. All three kinds of data can be processed using ByteBuffers, but you should use a Surface for raw video data to improve codec performance. Surface uses native video buffers without mapping or copying them to ByteBuffers; thus, it is much more efficient. You normally cannot access the raw video data when using a Surface, but you can use the ImageReader class to access unsecured decoded (raw) video frames. This may still be more efficient than using ByteBuffers, as some native buffers may be mapped into direct ByteBuffers (see ByteBuffer#isDirect). When using ByteBuffer mode, you can access raw video frames using the Image class and getInput/OutputImage(int).

 

Compressed Buffers

Input buffers (for decoders) and output buffers (for encoders) contain compressed data according to the MediaFormat#KEY_MIME format. For video types this is normally a single compressed video frame. For audio data this is normally a single access unit (an encoded audio segment typically containing a few milliseconds of audio as dictated by the format type), but this requirement is slightly relaxed in that a buffer may contain multiple encoded access units of audio. In either case, buffers do not start or end on arbitrary byte boundaries, but rather on frame/access unit boundaries unless they are flagged with BUFFER_FLAG_PARTIAL_FRAME.

Raw Audio Buffers

Raw audio buffers contain entire frames of PCM audio data, which is one sample for each channel in channel order. Each PCM audio sample is either a 16 bit signed integer or a float, in native byte order. Raw audio buffers in the float PCM encoding are only possible if the MediaFormat's MediaFormat#KEY_PCM_ENCODING is set to AudioFormat#ENCODING_PCM_FLOAT during MediaCodec configure(…) and confirmed by getOutputFormat() for decoders or getInputFormat() for encoders. A sample method to check for float PCM in the MediaFormat is as follows:

 static boolean isPcmFloat(MediaFormat format) {
     return format.getInteger(MediaFormat.KEY_PCM_ENCODING, AudioFormat.ENCODING_PCM_16BIT)
             == AudioFormat.ENCODING_PCM_FLOAT;
 }

In order to extract, in a short array, one channel of a buffer containing 16 bit signed integer audio data, the following code may be used:

 

 // Assumes the buffer PCM encoding is 16 bit.
 short[] getSamplesForChannel(MediaCodec codec, int bufferId, int channelIx) {
     ByteBuffer outputBuffer = codec.getOutputBuffer(bufferId);
     MediaFormat format = codec.getOutputFormat(bufferId);
     ShortBuffer samples = outputBuffer.order(ByteOrder.nativeOrder()).asShortBuffer();
     int numChannels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
     if (channelIx < 0 || channelIx >= numChannels) {
         return null;
     }
     short[] res = new short[samples.remaining() / numChannels];
     for (int i = 0; i < res.length; ++i) {
         res[i] = samples.get(i * numChannels + channelIx);
     }
     return res;
 }
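The extraction loop itself is plain java.nio and can be exercised without a codec. The class PcmDeinterleave below is an illustrative stand-alone sketch (not part of the MediaCodec API) of the same channel-extraction logic applied to a hand-built interleaved 16-bit PCM buffer:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Stand-alone sketch of the channel-extraction loop above, operating on a
// plain ByteBuffer of interleaved 16-bit PCM (no MediaCodec required).
public class PcmDeinterleave {
    static short[] samplesForChannel(ByteBuffer pcm, int numChannels, int channelIx) {
        if (channelIx < 0 || channelIx >= numChannels) {
            return null;
        }
        ShortBuffer samples = pcm.order(ByteOrder.nativeOrder()).asShortBuffer();
        short[] res = new short[samples.remaining() / numChannels];
        for (int i = 0; i < res.length; ++i) {
            // Interleaved layout: frame i holds one sample per channel.
            res[i] = samples.get(i * numChannels + channelIx);
        }
        return res;
    }

    public static void main(String[] args) {
        // Two stereo frames: L0=1, R0=2, L1=3, R1=4.
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.nativeOrder());
        buf.putShort((short) 1).putShort((short) 2)
           .putShort((short) 3).putShort((short) 4);
        buf.flip();
        short[] left = samplesForChannel(buf, 2, 0);
        System.out.println(left[0] + "," + left[1]); // prints "1,3"
    }
}
```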

Raw Video Buffers

In ByteBuffer mode video buffers are laid out according to their MediaFormat#KEY_COLOR_FORMAT. You can get the supported color formats as an array via getCodecInfo().getCapabilitiesForType(…).colorFormats (see CodecCapabilities#colorFormats). Video codecs may support three kinds of color formats:

  1. native raw video format: This is marked by CodecCapabilities#COLOR_FormatSurface and it can be used with an input or output Surface.
  2. flexible YUV buffers (such as CodecCapabilities#COLOR_FormatYUV420Flexible): These can be used with an input/output Surface, as well as in ByteBuffer mode, by using getInput/OutputImage(int).
  3. other, specific formats: These are normally only supported in ByteBuffer mode. Some color formats are vendor specific. Others are defined in CodecCapabilities. For color formats that are equivalent to a flexible format, you can still use getInput/OutputImage(int).

All video codecs support flexible YUV 4:2:0 buffers since Build.VERSION_CODES.LOLLIPOP_MR1.

Accessing Raw Video ByteBuffers on Older Devices

Prior to Build.VERSION_CODES.LOLLIPOP and Image support, you need to use the MediaFormat#KEY_STRIDE and MediaFormat#KEY_SLICE_HEIGHT output format values to understand the layout of the raw output buffers.

Note that on some devices the slice-height is advertised as 0. This could mean either that the slice-height is the same as the frame height, or that the slice-height is the frame height aligned to some value (usually a power of 2). Unfortunately, there is no standard and simple way to tell the actual slice height in this case. Furthermore, the vertical stride of the U plane in planar formats is also not specified or defined, though usually it is half of the slice height.

The MediaFormat#KEY_WIDTH and MediaFormat#KEY_HEIGHT keys specify the size of the video frames; however, for most encodings the video (picture) only occupies a portion of the video frame. This is represented by the 'crop rectangle'.

You need to use the following keys to get the crop rectangle of raw output images from the output format. If these keys are not present, the video occupies the entire video frame. The crop rectangle is understood in the context of the output frame before applying any MediaFormat#KEY_ROTATION.

The size of the video frame (before rotation) can be calculated as such:

 MediaFormat format = decoder.getOutputFormat(…);
 int width = format.getInteger(MediaFormat.KEY_WIDTH);
 if (format.containsKey("crop-left") && format.containsKey("crop-right")) {
    width = format.getInteger("crop-right") + 1 - format.getInteger("crop-left");
 }
 int height = format.getInteger(MediaFormat.KEY_HEIGHT);
 if (format.containsKey("crop-top") && format.containsKey("crop-bottom")) {
    height = format.getInteger("crop-bottom") + 1 - format.getInteger("crop-top");
 }
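The "+ 1" in the computation above reflects that the crop-* keys are inclusive pixel indices. A tiny stand-alone illustration (the helper name cropSpan is hypothetical, not an Android API):

```java
// The crop-* keys are inclusive bounds, so a span covers hi + 1 - lo pixels.
public class CropSize {
    static int cropSpan(int lo, int hi) {
        return hi + 1 - lo;
    }

    public static void main(String[] args) {
        // e.g. a coded frame 1088 rows tall cropped to 1080 visible rows:
        // crop-top = 0, crop-bottom = 1079.
        System.out.println(cropSpan(0, 1079)); // prints "1080"
    }
}
```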

Also note that the meaning of BufferInfo#offset was not consistent across devices. On some devices the offset pointed to the top-left pixel of the crop rectangle, while on most devices it pointed to the top-left pixel of the entire frame.

States


During its life a codec conceptually exists in one of three states: Stopped, Executing or Released. The Stopped collective state is actually the conglomeration of three states: Uninitialized, Configured and Error, whereas the Executing state conceptually progresses through three sub-states: Flushed, Running and End-of-Stream.

When you create a codec using one of the factory methods, the codec is in the Uninitialized state. First, you need to configure it via configure(…), which brings it to the Configured state, then call start() to move it to the Executing state. In this state you can process data through the buffer queue manipulation described above.

The Executing state has three sub-states: Flushed, Running and End-of-Stream. Immediately after start() the codec is in the Flushed sub-state, where it holds all the buffers. As soon as the first input buffer is dequeued, the codec moves to the Running sub-state, where it spends most of its life. When you queue an input buffer with the end-of-stream marker, the codec transitions to the End-of-Stream sub-state. In this state the codec no longer accepts further input buffers, but still generates output buffers until the end-of-stream is reached on the output. You can move back to the Flushed sub-state at any time while in the Executing state using flush().

Call stop() to return the codec to the Uninitialized state, whereupon it may be configured again. When you are done using a codec, you must release it by calling release().

On rare occasions the codec may encounter an error and move to the Error state. This is communicated using an invalid return value from a queuing operation, or sometimes via an exception. Call reset() to make the codec usable again. You can call it from any state to move the codec back to the Uninitialized state. Otherwise, call release() to move to the terminal Released state.
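The state transitions above map directly onto API calls. A minimal lifecycle sketch, with reset() used for error recovery (the MIME type and the empty data loop are illustrative):

 MediaCodec codec = MediaCodec.createDecoderByType("video/avc");
 try {
     codec.configure(format, null, null, 0); // Uninitialized -> Configured
     codec.start();                          // Configured -> Executing (Flushed)
     // queue/dequeue buffers here: Flushed -> Running -> End-of-Stream
     codec.stop();                           // back to Uninitialized
 } catch (MediaCodec.CodecException e) {
     codec.reset();                          // Error -> Uninitialized
 } finally {
     codec.release();                        // terminal Released state
 }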

Creation


Use MediaCodecList to create a MediaCodec for a specific MediaFormat. When decoding a file or a stream, you can get the desired format from MediaExtractor#getTrackFormat. Inject any specific features that you want to add using MediaFormat#setFeatureEnabled, then call MediaCodecList#findDecoderForFormat to get the name of a codec that can handle that specific media format. Finally, create the codec using createByCodecName(String).

Note: On Build.VERSION_CODES.LOLLIPOP, the format to MediaCodecList.findDecoder/EncoderForFormat must not contain a MediaFormat#KEY_FRAME_RATE. Use format.setString(MediaFormat.KEY_FRAME_RATE, null) to clear any existing frame rate setting in the format.

You can also create the preferred codec for a specific MIME type using createDecoder/EncoderByType(java.lang.String). This, however, cannot be used to inject features, and may create a codec that cannot handle the specific desired media format.
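The creation steps above can be sketched as follows (the track index 0 and the choice of FEATURE_AdaptivePlayback are illustrative):

 MediaExtractor extractor = new MediaExtractor();
 extractor.setDataSource(path);
 MediaFormat format = extractor.getTrackFormat(0);
 format.setFeatureEnabled(
         MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback, true);
 MediaCodecList codecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
 String codecName = codecList.findDecoderForFormat(format);
 if (codecName != null) {
     MediaCodec codec = MediaCodec.createByCodecName(codecName);
 }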

 

Creating secure decoders


On versions Build.VERSION_CODES.KITKAT_WATCH and earlier, secure codecs might not be listed in MediaCodecList, but may still be available on the system. Secure codecs that exist can be instantiated by name only, by appending ".secure" to the name of a regular codec (the name of all secure codecs must end in ".secure".) createByCodecName(String) will throw an IOException if the codec is not present on the system.
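On such older releases, instantiating a secure variant therefore looks like this (the base codec name is a hypothetical vendor name, not a real component):

 // Throws IOException if the secure codec is not present on the system.
 MediaCodec secureDecoder =
         MediaCodec.createByCodecName("OMX.vendor.video.decoder.avc.secure");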

From Build.VERSION_CODES.LOLLIPOP onwards, you should use the CodecCapabilities#FEATURE_SecurePlayback feature in the media format to create a secure decoder.

 

Initialization


After creating the codec, you can set a callback using setCallback if you want to process data asynchronously. Then, configure the codec using the specific media format. This is when you can specify the output Surface for video producers – codecs that generate raw video data (e.g. video decoders). This is also when you can set the decryption parameters for secure codecs (see MediaCrypto). Finally, since some codecs can operate in multiple modes, you must specify whether you want it to work as a decoder or an encoder.

 

Since Build.VERSION_CODES.LOLLIPOP, you can query the resulting input and output format in the Configured state. You can use this to verify the resulting configuration, e.g. color formats, before starting the codec.

 

If you want to process raw input video buffers natively with a video consumer – a codec that processes raw video input, such as a video encoder – create a destination Surface for your input data using createInputSurface() after configuration. Alternately, set up the codec to use a previously created persistent input surface by calling setInputSurface(Surface).
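For a Surface-fed encoder, the order of calls matters: the input surface must be created after configure() and before start(). A minimal sketch (the format values are illustrative):

 MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
 format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
         MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
 format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
 format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
 format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
 MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
 encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
 Surface inputSurface = encoder.createInputSurface(); // render frames into this
 encoder.start();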

 

Codec-specific Data


Some formats, notably AAC audio and MPEG4, H.264 and H.265 video formats require the actual data to be prefixed by a number of buffers containing setup data, or codec specific data. When processing such compressed formats, this data must be submitted to the codec after start() and before any frame data. Such data must be marked using the flag BUFFER_FLAG_CODEC_CONFIG in a call to queueInputBuffer.

 

Codec-specific data can also be included in the format passed to configure in ByteBuffer entries with keys "csd-0", "csd-1", etc. These keys are always included in the track MediaFormat obtained from the MediaExtractor#getTrackFormat. Codec-specific data in the format is automatically submitted to the codec upon start(); you MUST NOT submit this data explicitly. If the format did not contain codec specific data, you can choose to submit it using the specified number of buffers in the correct order, according to the format requirements. In case of H.264 AVC, you can also concatenate all codec-specific data and submit it as a single codec-config buffer.

Android requires specific codec-specific data buffers per format; for H.264 AVC, for example, csd-0 carries the SPS and csd-1 the PPS. These buffers are also required to be set in the track format for proper MediaMuxer track configuration. Each H.264/H.265 parameter set used as codec-specific data must start with a start code of "\x00\x00\x00\x01".
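If the container did not supply "csd-0"/"csd-1", an H.264 decoder format can be populated manually before configure(). In this sketch, spsBytes and ppsBytes are placeholders for real parameter sets, each beginning with the start code:

 MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
 format.setByteBuffer("csd-0", ByteBuffer.wrap(spsBytes)); // SPS
 format.setByteBuffer("csd-1", ByteBuffer.wrap(ppsBytes)); // PPS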

 

Note: care must be taken if the codec is flushed immediately or shortly after start, before any output buffer or output format change has been returned, as the codec specific data may be lost during the flush. You must resubmit the data using buffers marked with BUFFER_FLAG_CODEC_CONFIG after such flush to ensure proper codec operation.

Encoders (or codecs that generate compressed data) will create and return the codec specific data before any valid output buffer in output buffers marked with the codec-config flag. Buffers containing codec-specific-data have no meaningful timestamps.

 

Data Processing


Each codec maintains a set of input and output buffers that are referred to by a buffer-ID in API calls. After a successful call to start() the client "owns" neither input nor output buffers. In synchronous mode, call dequeueInput/OutputBuffer(…) to obtain (get ownership of) an input or output buffer from the codec. In asynchronous mode, you will automatically receive available buffers via the Callback#onInputBufferAvailable/Callback#onOutputBufferAvailable callbacks.

 

Upon obtaining an input buffer, fill it with data and submit it to the codec using queueInputBuffer – or queueSecureInputBuffer if using decryption. Do not submit multiple input buffers with the same timestamp (unless it is codec-specific data marked as such).

The codec in turn will return a read-only output buffer via the Callback#onOutputBufferAvailable callback in asynchronous mode, or in response to a dequeueOutputBuffer call in synchronous mode. After the output buffer has been processed, call one of the releaseOutputBuffer methods to return the buffer to the codec.

 

While you are not required to resubmit/release buffers immediately to the codec, holding onto input and/or output buffers may stall the codec, and this behavior is device dependent. Specifically, it is possible that a codec may hold off on generating output buffers until all outstanding buffers have been released/resubmitted. Therefore, try to hold onto available buffers as little as possible.

 Depending on the API version, you can process data in three ways:

 

Asynchronous Processing using Buffers


Since Build.VERSION_CODES.LOLLIPOP, the preferred method is to process data asynchronously by setting a callback before calling configure. Asynchronous mode changes the state transitions slightly, because you must call start() after flush() to transition the codec to the Running sub-state and start receiving input buffers. Similarly, upon an initial call to start the codec will move directly to the Running sub-state and start passing available input buffers via the callback.

MediaCodec is typically used like this in asynchronous mode:

 MediaCodec codec = MediaCodec.createByCodecName(name);
 MediaFormat mOutputFormat; // member variable
 codec.setCallback(new MediaCodec.Callback() {
  @Override
  void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
    ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
    // fill inputBuffer with valid data
    …
    codec.queueInputBuffer(inputBufferId, …);
  }
 
  @Override
  void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
    ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
    MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
    // bufferFormat is equivalent to mOutputFormat
    // outputBuffer is ready to be processed or rendered.
    …
    codec.releaseOutputBuffer(outputBufferId, …);
  }
 
  @Override
  void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
    // Subsequent data will conform to new format.
    // Can ignore if using getOutputFormat(outputBufferId)
    mOutputFormat = format; // option B
  }
 
  @Override
  void onError(…) {
    …
  }
 });
 codec.configure(format, …);
 mOutputFormat = codec.getOutputFormat(); // option B
 codec.start();
 // wait for processing to complete
 codec.stop();
 codec.release();

Synchronous Processing using Buffers


Since Build.VERSION_CODES.LOLLIPOP, you should retrieve input and output buffers using getInput/OutputBuffer(int) and/or getInput/OutputImage(int) even when using the codec in synchronous mode. This allows certain optimizations by the framework, e.g. when processing dynamic content. This optimization is disabled if you call getInput/OutputBuffers().

Note: do not mix the methods of using buffers and buffer arrays at the same time. Specifically, only call getInput/OutputBuffers directly after start() or after having dequeued an output buffer ID with the value of INFO_OUTPUT_FORMAT_CHANGED.

MediaCodec is typically used like this in synchronous mode:

 MediaCodec codec = MediaCodec.createByCodecName(name);
 codec.configure(format, …);
 MediaFormat outputFormat = codec.getOutputFormat(); // option B
 codec.start();
 for (;;) {
  int inputBufferId = codec.dequeueInputBuffer(timeoutUs);
  if (inputBufferId >= 0) {
    ByteBuffer inputBuffer = codec.getInputBuffer(…);
    // fill inputBuffer with valid data
    …
    codec.queueInputBuffer(inputBufferId, …);
  }
  int outputBufferId = codec.dequeueOutputBuffer(…);
  if (outputBufferId >= 0) {
    ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
    MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
    // bufferFormat is identical to outputFormat
    // outputBuffer is ready to be processed or rendered.
    …
    codec.releaseOutputBuffer(outputBufferId, …);
  } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    // Subsequent data will conform to new format.
    // Can ignore if using getOutputFormat(outputBufferId)
    outputFormat = codec.getOutputFormat(); // option B
  }
 }
 codec.stop();
 codec.release();

Synchronous Processing using Buffer Arrays (deprecated)


In versions Build.VERSION_CODES.KITKAT_WATCH and before, the set of input and output buffers are represented by the ByteBuffer[] arrays. After a successful call to start(), retrieve the buffer arrays using getInput/OutputBuffers(). Use the buffer ID-s as indices into these arrays (when non-negative), as demonstrated in the sample below. Note that there is no inherent correlation between the size of the arrays and the number of input and output buffers used by the system, although the array size provides an upper bound.

 MediaCodec codec = MediaCodec.createByCodecName(name);
 codec.configure(format, …);
 codec.start();
 ByteBuffer[] inputBuffers = codec.getInputBuffers();
 ByteBuffer[] outputBuffers = codec.getOutputBuffers();
 for (;;) {
  int inputBufferId = codec.dequeueInputBuffer(…);
  if (inputBufferId >= 0) {
    // fill inputBuffers[inputBufferId] with valid data
    …
    codec.queueInputBuffer(inputBufferId, …);
  }
  int outputBufferId = codec.dequeueOutputBuffer(…);
  if (outputBufferId >= 0) {
    // outputBuffers[outputBufferId] is ready to be processed or rendered.
    …
    codec.releaseOutputBuffer(outputBufferId, …);
  } else if (outputBufferId == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
    outputBuffers = codec.getOutputBuffers();
  } else if (outputBufferId == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    // Subsequent data will conform to new format.
    MediaFormat format = codec.getOutputFormat();
  }
 }
 codec.stop();
 codec.release();

End-of-stream Handling


When you reach the end of the input data, you must signal it to the codec by specifying the BUFFER_FLAG_END_OF_STREAM flag in the call to queueInputBuffer. You can do this on the last valid input buffer, or by submitting an additional empty input buffer with the end-of-stream flag set. If using an empty buffer, the timestamp will be ignored.

The codec will continue to return output buffers until it eventually signals the end of the output stream by specifying the same end-of-stream flag in the BufferInfo set in dequeueOutputBuffer or returned via Callback#onOutputBufferAvailable. This can be set on the last valid output buffer, or on an empty buffer after the last valid output buffer. The timestamp of such empty buffer should be ignored.

Do not submit additional input buffers after signaling the end of the input stream, unless the codec has been flushed, or stopped and restarted.
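Signaling end-of-stream with an empty input buffer, as described above, looks like this in synchronous mode:

 int inputBufferId = codec.dequeueInputBuffer(timeoutUs);
 if (inputBufferId >= 0) {
     // Empty buffer (offset 0, size 0); its timestamp is ignored.
     codec.queueInputBuffer(inputBufferId, 0, 0, 0,
             MediaCodec.BUFFER_FLAG_END_OF_STREAM);
 }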


 

Using an Output Surface


The data processing is nearly identical to the ByteBuffer mode when using an output Surface; however, the output buffers will not be accessible, and are represented as null values. E.g. getOutputBuffer/Image(int) will return null and getOutputBuffers() will return an array containing only null-s.


 

When using an output Surface, you can select whether or not to render each output buffer on the surface. You have three choices:

1. Do not render the buffer: call releaseOutputBuffer(bufferId, false).
2. Render the buffer with the default timestamp: call releaseOutputBuffer(bufferId, true).
3. Render the buffer with a specific timestamp: call releaseOutputBuffer(bufferId, timestamp).

 

Since Build.VERSION_CODES.M, the default timestamp is the BufferInfo#presentationTimeUs of the buffer (converted to nanoseconds). It was not defined prior to that.


 

Also since Build.VERSION_CODES.M, you can change the output Surface dynamically using setOutputSurface.


 

When rendering output to a Surface, the Surface may be configured to drop excessive frames (that are not consumed by the Surface in a timely manner). Or it may be configured to not drop excessive frames. In the latter mode if the Surface is not consuming output frames fast enough, it will eventually block the decoder. Prior to Build.VERSION_CODES.Q the exact behavior was undefined, with the exception that View surfaces (SurfaceView or TextureView) always dropped excessive frames. Since Build.VERSION_CODES.Q the default behavior is to drop excessive frames. Applications can opt out of this behavior for non-View surfaces (such as ImageReader or SurfaceTexture) by targeting SDK Build.VERSION_CODES.Q and setting the key "allow-frame-drop" to 0 in their configure format.

 
