ExoPlayer Analysis (5): ExoPlayer's Operations on AudioTrack

Related posts

ExoPlayer Analysis (1): Entering the World of ExoPlayer
ExoPlayer Analysis (2): Writing an ExoPlayer Demo
ExoPlayer Analysis (3): Flow Analysis: the Creation Flow from build to prepare
ExoPlayer Analysis (4): From renderer.render down to MediaCodec
ExoPlayer Analysis (5): ExoPlayer's Operations on AudioTrack
ExoPlayer Analysis (6): An Analysis of ExoPlayer's Synchronization Mechanism
ExoPlayer Analysis (7): ExoPlayer's Handling of Audio Timestamps
ExoPlayer Extensions (1): An Introduction to DASH and HLS Streams

1. Introduction:
In the previous post we analyzed the renderer.render call and the startRenderers method inside doSomeWork, and noted when covering the latter that starting audio ultimately comes down to calling AudioTrack's play(). This post focuses on how ExoPlayer creates an AudioTrack and writes data into it.

2. Analysis of the drainOutputBuffer function:
The AudioTrack operations are driven from renderer.render. Going back to the render method, here is the snippet we quoted before:

  @Override
  public void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException {
    /* Handle a pending end of stream, if any */
    if (pendingOutputEndOfStream) {
      pendingOutputEndOfStream = false;
      processEndOfStream();
    }
    ...
    // We have a format.
    /* Configure the codec */
    maybeInitCodecOrBypass();
	...
    TraceUtil.beginSection("drainAndFeed");
    /* Drain decoded output */
    while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
        && shouldContinueRendering(renderStartTimeMs)) {}
    /* Feed source data */
    while (feedInputBuffer() && shouldContinueRendering(renderStartTimeMs)) {}
    TraceUtil.endSection();
		...
  }

Next, let's look at the drainOutputBuffer function:

  private boolean drainOutputBuffer(long positionUs, long elapsedRealtimeUs)
      throws ExoPlaybackException {
    /* Check whether we already hold an output buffer */
    if (!hasOutputBuffer()) {
      int outputIndex;
      if (codecNeedsEosOutputExceptionWorkaround && codecReceivedEos) {
        try {
          outputIndex = codecAdapter.dequeueOutputBufferIndex(outputBufferInfo);
        } catch (IllegalStateException e) {
          processEndOfStream();
          if (outputStreamEnded) {
            // Release the codec, as it's in an error state.
            releaseCodec();
          }
          return false;
        }
      } else {
        /* Dequeue an available output buffer index from the decoder */
        outputIndex = codecAdapter.dequeueOutputBufferIndex(outputBufferInfo);
      }

      if (outputIndex < 0) {
        if (outputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED /* (-2) */) {
          processOutputMediaFormatChanged();
          return true;
        } else if (outputIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED /* (-3) */) {
          processOutputBuffersChanged();
          return true;
        }
        /* MediaCodec.INFO_TRY_AGAIN_LATER (-1) or unknown negative return value */
        if (codecNeedsEosPropagation
            && (inputStreamEnded || codecDrainState == DRAIN_STATE_WAIT_END_OF_STREAM)) {
          processEndOfStream();
        }
        return false;
      }

      // We've dequeued a buffer.
      if (shouldSkipAdaptationWorkaroundOutputBuffer) {
        shouldSkipAdaptationWorkaroundOutputBuffer = false;
        codec.releaseOutputBuffer(outputIndex, false);
        return true;
      } else if (outputBufferInfo.size == 0
          && (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        // The dequeued buffer indicates the end of the stream. Process it immediately.
        processEndOfStream();
        return false;
      }
      /* Look up the output buffer for this index */
      this.outputIndex = outputIndex;
      outputBuffer = getOutputBuffer(outputIndex);

      // The dequeued buffer is a media buffer. Do some initial setup.
      // It will be processed by calling processOutputBuffer (possibly multiple times).
      if (outputBuffer != null) {
        outputBuffer.position(outputBufferInfo.offset);
        outputBuffer.limit(outputBufferInfo.offset + outputBufferInfo.size);
      }
      isDecodeOnlyOutputBuffer = isDecodeOnlyBuffer(outputBufferInfo.presentationTimeUs);
      isLastOutputBuffer =
          lastBufferInStreamPresentationTimeUs == outputBufferInfo.presentationTimeUs;
      updateOutputFormatForTime(outputBufferInfo.presentationTimeUs);
    }

    boolean processedOutputBuffer;
    if (codecNeedsEosOutputExceptionWorkaround && codecReceivedEos) {
      try {
        processedOutputBuffer =
            processOutputBuffer(
                positionUs,
                elapsedRealtimeUs,
                codec,
                outputBuffer,
                outputIndex,
                outputBufferInfo.flags,
                /* sampleCount= */ 1,
                outputBufferInfo.presentationTimeUs,
                isDecodeOnlyOutputBuffer,
                isLastOutputBuffer,
                outputFormat);
      } catch (IllegalStateException e) {
        processEndOfStream();
        if (outputStreamEnded) {
          // Release the codec, as it's in an error state.
          releaseCodec();
        }
        return false;
      }
    } else {
      /* Process the output buffer */
      processedOutputBuffer =
          processOutputBuffer(
              positionUs,
              elapsedRealtimeUs,
              codec,
              outputBuffer,
              outputIndex,
              outputBufferInfo.flags,
              /* sampleCount= */ 1,
              outputBufferInfo.presentationTimeUs,
              isDecodeOnlyOutputBuffer,
              isLastOutputBuffer,
              outputFormat);
    }

    /* true means the output buffer was fully consumed; false means data is still left in it */
    if (processedOutputBuffer) {
      onProcessedOutputBuffer(outputBufferInfo.presentationTimeUs);
      boolean isEndOfStream = (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
      /* Null out outputBuffer and set outputIndex to -1 so the next call fetches a fresh output buffer */
      resetOutputBuffer();
      /* Check whether end of stream needs to be processed */
      if (!isEndOfStream) {
        return true;
      }
      processEndOfStream();
    }

    return false;
  }

The function looks long, but it really only does two things. First it checks whether an output buffer is already held; if not, it calls the MediaCodec API to dequeue the index of an available buffer and uses that index to look up the buffer. Once a buffer is in hand, processOutputBuffer is called to handle it. Let's start with the check for an available buffer:

  private boolean hasOutputBuffer() {
    return outputIndex >= 0;
  }
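
Its counterpart is resetOutputBuffer, invoked once a buffer has been fully consumed. A minimal sketch of what it amounts to (the field names follow the MediaCodecRenderer code quoted above; the exact method body may differ between versions):

  private void resetOutputBuffer() {
    outputIndex = -1;    // C.INDEX_UNSET, so hasOutputBuffer() is false again
    outputBuffer = null; // drop the reference to the codec's ByteBuffer
  }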

outputIndex is obtained from codecAdapter.dequeueOutputBufferIndex() (a thin wrapper over MediaCodec's dequeue call), so a value greater than or equal to 0 means the decoder already holds decoded data waiting to be played. processOutputBuffer is then called to handle the output buffer. Before diving into it, the handling of its result deserves attention (this is important): ExoPlayer records the result in the boolean processedOutputBuffer. If it is true, every byte in the buffer was written into the AudioTrack, so the output buffer is reset (the outputBuffer held by MediaCodecRenderer is nulled and outputIndex is set to -1, as sketched above); the next call into this function will then issue a fresh dequeueOutputBufferIndex() to find a new buffer to keep feeding the AudioTrack. And if processedOutputBuffer is false? The if at the top of the function tells us: the next call skips the dequeue and keeps pushing the remaining data of the same buffer into the AudioTrack.

That leaves one last question: what does drainOutputBuffer's return value mean to its caller? Look at the call site in renderer.render:

	...
    /* Drain decoded output */
    while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
       && shouldContinueRendering(renderStartTimeMs)) {}
   ...

The while loop keeps calling the function as long as it returns true, and drops out of the loop to do the follow-up work once it returns false. Many people are puzzled by this construction, so let me state the conclusion up front:
A return value of true means the AudioTrack has enough writable space (consumption outpaces production); false means the AudioTrack cannot absorb all of the buffer's data (production outpaces consumption). Put that way it makes sense: from MediaCodecRenderer's point of view, true from drainOutputBuffer means the AudioTrack is hungry for data and can keep being fed, while false means too much unconsumed data is sitting in the AudioTrack and feeding should pause for now.
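
To make that contract concrete, here is a self-contained toy model of the return-value semantics. Everything in it is illustrative rather than ExoPlayer code: a fixed-capacity "track" absorbs fixed-size buffers until one no longer fits completely, at which point drainOnce() returns false, just as drainOutputBuffer does:

  public final class DrainContractDemo {
    static int trackFreeBytes = 4096; // space left in the toy "AudioTrack"
    static int pendingBytes = 0;      // leftover from a partial write

    /** Mirrors drainOutputBuffer's result: true = fully consumed, keep looping. */
    static boolean drainOnce(int bufferBytes) {
      int toWrite = pendingBytes > 0 ? pendingBytes : bufferBytes;
      int written = Math.min(toWrite, trackFreeBytes); // a non-blocking write
      trackFreeBytes -= written;
      pendingBytes = toWrite - written;
      return pendingBytes == 0; // false: stop draining for this render() pass
    }

    public static void main(String[] args) {
      while (drainOnce(1536)) { // the render() while-loop in miniature
        System.out.println("buffer fully consumed, track free: " + trackFreeBytes);
      }
      System.out.println("track full, leftover bytes: " + pendingBytes);
    }
  }
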
With the outer producer-consumer relationship settled, let's go back in and analyze processOutputBuffer. Here is MediaCodecAudioRenderer's implementation:

  @Override
  protected boolean processOutputBuffer(
      long positionUs,
      long elapsedRealtimeUs,
      @Nullable MediaCodec codec,
      @Nullable ByteBuffer buffer,
      int bufferIndex,
      int bufferFlags,
      int sampleCount,
      long bufferPresentationTimeUs,
      boolean isDecodeOnlyBuffer,
      boolean isLastBuffer,
      Format format)
      throws ExoPlaybackException {
    checkNotNull(buffer);
    if (codec != null
        && codecNeedsEosBufferTimestampWorkaround
        && bufferPresentationTimeUs == 0
        && (bufferFlags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
        && getLargestQueuedPresentationTimeUs() != C.TIME_UNSET) {
      bufferPresentationTimeUs = getLargestQueuedPresentationTimeUs();
    }

    if (decryptOnlyCodecFormat != null
        && (bufferFlags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
      // Discard output buffers from the passthrough (raw) decoder containing codec specific data.
      checkNotNull(codec).releaseOutputBuffer(bufferIndex, false);
      return true;
    }

    if (isDecodeOnlyBuffer) {
      if (codec != null) {
        codec.releaseOutputBuffer(bufferIndex, false);
      }
      decoderCounters.skippedOutputBufferCount += sampleCount;
      audioSink.handleDiscontinuity();
      return true;
    }

    boolean fullyConsumed;
    try {
      /* 1. Consume the output buffer via the AudioTrack; fullyConsumed records whether it was consumed completely */
      fullyConsumed = audioSink.handleBuffer(buffer, bufferPresentationTimeUs, sampleCount);
    } catch (AudioSink.InitializationException | AudioSink.WriteException e) {
      throw createRendererException(e, format);
    }

    if (fullyConsumed) {
      if (codec != null) {
        /* 2. If fully consumed, release the corresponding buffer */
        codec.releaseOutputBuffer(bufferIndex, false);
      }
      decoderCounters.renderedOutputBufferCount += sampleCount;
      return true;
    }

    return false;
  }

We can ignore the guard conditions at the top; the whole function really does one thing: it asks the audio sink to handle the buffer's data and records the result in the boolean fullyConsumed, whose name says it all, namely whether the data was consumed completely. When it returns true, MediaCodec.releaseOutputBuffer() is called to release the buffer. Let's follow audioSink.handleBuffer(). As analyzed in an earlier post, the audioSink implementation is DefaultAudioSink; here is its handleBuffer:

  @Override
  @SuppressWarnings("ReferenceEquality")
  public boolean handleBuffer(
      ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount)
      throws InitializationException, WriteException {
    Assertions.checkArgument(inputBuffer == null || buffer == inputBuffer);

    if (pendingConfiguration != null) {
      if (!drainToEndOfStream()) {
        // There's still pending data in audio processors to write to the track.
        return false;
      } else if (!pendingConfiguration.canReuseAudioTrack(configuration)) {
        playPendingData();
        if (hasPendingData()) {
          // We're waiting for playout on the current audio track to finish.
          return false;
        }
        flush();
      } else {
        // The current audio track can be reused for the new configuration.
        configuration = pendingConfiguration;
        pendingConfiguration = null;
        if (isOffloadedPlayback(audioTrack)) {
          audioTrack.setOffloadEndOfStream();
          audioTrack.setOffloadDelayPadding(
              configuration.inputFormat.encoderDelay, configuration.inputFormat.encoderPadding);
          isWaitingForOffloadEndOfStreamHandled = true;
        }
      }
      // Re-apply playback parameters.
      applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
    }

    /* Initialize the AudioTrack */
    if (!isAudioTrackInitialized()) {
      initializeAudioTrack();
    }

    /* Set AudioTrack parameters: playback speed, pitch, etc. */
    if (startMediaTimeUsNeedsInit) {
      startMediaTimeUs = max(0, presentationTimeUs);
      startMediaTimeUsNeedsSync = false;
      startMediaTimeUsNeedsInit = false;

      if (enableAudioTrackPlaybackParams && Util.SDK_INT >= 23) {
        setAudioTrackPlaybackParametersV23(audioTrackPlaybackParameters);
      }
      applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
      /* Start playback if the sink is in the playing state */
      if (playing) {
        play();
      }
    }

    if (!audioTrackPositionTracker.mayHandleBuffer(getWrittenFrames())) {
      return false;
    }
    /* Note: this inputBuffer is owned by DefaultAudioSink; non-null means the buffer handed over by the decoder has not been fully consumed yet */
    if (inputBuffer == null) {
      // We are seeing this buffer for the first time.
      Assertions.checkArgument(buffer.order() == ByteOrder.LITTLE_ENDIAN);
      if (!buffer.hasRemaining()) {
        // The buffer is empty.
        return true;
      }

      if (configuration.outputMode != OUTPUT_MODE_PCM && framesPerEncodedSample == 0) {
        // If this is the first encoded sample, calculate the sample size in frames.
        framesPerEncodedSample = getFramesPerEncodedSample(configuration.outputEncoding, buffer);
        if (framesPerEncodedSample == 0) {
          // We still don't know the number of frames per sample, so drop the buffer.
          // For TrueHD this can occur after some seek operations, as not every sample starts with
          // a syncframe header. If we chunked samples together so the extracted samples always
          // started with a syncframe header, the chunks would be too large.
          return true;
        }
      }

      if (afterDrainParameters != null) {
        if (!drainToEndOfStream()) {
          // Don't process any more input until draining completes.
          return false;
        }
        applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
        afterDrainParameters = null;
      }

      // Check that presentationTimeUs is consistent with the expected value.
      long expectedPresentationTimeUs =
          startMediaTimeUs
              + configuration.inputFramesToDurationUs(
                  getSubmittedFrames() - trimmingAudioProcessor.getTrimmedFrameCount());
      if (!startMediaTimeUsNeedsSync
          && Math.abs(expectedPresentationTimeUs - presentationTimeUs) > 200000) {
        Log.e(
            TAG,
            "Discontinuity detected [expected "
                + expectedPresentationTimeUs
                + ", got "
                + presentationTimeUs
                + "]");
        startMediaTimeUsNeedsSync = true;
      }
      if (startMediaTimeUsNeedsSync) {
        if (!drainToEndOfStream()) {
          // Don't update timing until pending AudioProcessor buffers are completely drained.
          return false;
        }
        // Adjust startMediaTimeUs to be consistent with the current buffer's start time and the
        // number of bytes submitted.
        long adjustmentUs = presentationTimeUs - expectedPresentationTimeUs;
        startMediaTimeUs += adjustmentUs;
        startMediaTimeUsNeedsSync = false;
        // Re-apply playback parameters because the startMediaTimeUs changed.
        applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
        if (listener != null && adjustmentUs != 0) {
          listener.onPositionDiscontinuity();
        }
      }

      if (configuration.outputMode == OUTPUT_MODE_PCM) {
        /* Track the total number of bytes submitted from the decoder to the AudioTrack */
        submittedPcmBytes += buffer.remaining();
      } else {
        submittedEncodedFrames += framesPerEncodedSample * encodedAccessUnitCount;
      }

      /* Hand the decoder's output buffer over to the input buffer owned by DefaultAudioSink */
      inputBuffer = buffer;
      inputBufferAccessUnitCount = encodedAccessUnitCount;
    }

    /* Write the buffer's data into the AudioTrack */
    processBuffers(presentationTimeUs);

    /* Return true once all data in the buffer has been consumed */
    if (!inputBuffer.hasRemaining()) {
      inputBuffer = null;
      inputBufferAccessUnitCount = 0;
      return true;
    }

    if (audioTrackPositionTracker.isStalled(getWrittenFrames())) {
      Log.w(TAG, "Resetting stalled audio track");
      flush();
      return true;
    }

    return false;
  }

The function is very long but easy to follow; the important parts carry my comments. First of all, ExoPlayer needs to create an AudioTrack, so let's step into initializeAudioTrack:

  private void initializeAudioTrack() throws InitializationException {
    // If we're asynchronously releasing a previous audio track then we block until it has been
    // released. This guarantees that we cannot end up in a state where we have multiple audio
    // track instances. Without this guarantee it would be possible, in extreme cases, to exhaust
    // the shared memory that's available for audio track buffers. This would in turn cause the
    // initialization of the audio track to fail.
    releasingConditionVariable.block();

    /* Create the AudioTrack */
    audioTrack = buildAudioTrack();
	...
    audioTrackPositionTracker.setAudioTrack(
        audioTrack,
        /* isPassthrough= */ configuration.outputMode == OUTPUT_MODE_PASSTHROUGH,
        configuration.outputEncoding,
        configuration.outputPcmFrameSize,
        configuration.bufferSize);
    setVolumeInternal();
	...
    /* Setting this to true makes the next handleBuffer call configure track parameters */
    startMediaTimeUsNeedsInit = true;
  }

We only need the key call, buildAudioTrack():

  private AudioTrack buildAudioTrack() throws InitializationException {
    try {
      /* The AudioTrack is created here */
      return Assertions.checkNotNull(configuration)
          .buildAudioTrack(tunneling, audioAttributes, audioSessionId);
    } catch (InitializationException e) {
      maybeDisableOffload();
      throw e;
    }
  }

Following down into buildAudioTrack:

    public AudioTrack buildAudioTrack(
        boolean tunneling, AudioAttributes audioAttributes, int audioSessionId)
        throws InitializationException {
      AudioTrack audioTrack;
      try {
        /* Create the AudioTrack */
        audioTrack = createAudioTrack(tunneling, audioAttributes, audioSessionId);
      } catch (UnsupportedOperationException e) {
        throw new InitializationException(
            AudioTrack.STATE_UNINITIALIZED, outputSampleRate, outputChannelConfig, bufferSize);
      }
	  
      /* Check the track state; release it immediately if it is abnormal */
      int state = audioTrack.getState();
      if (state != STATE_INITIALIZED) {
        try {
          audioTrack.release();
        } catch (Exception e) {
          // The track has already failed to initialize, so it wouldn't be that surprising if
          // release were to fail too. Swallow the exception.
        }
        throw new InitializationException(state, outputSampleRate, outputChannelConfig, bufferSize);
      }
      return audioTrack;
    }

buildAudioTrack is easy to understand: it creates the AudioTrack and then checks that its state is healthy. On to createAudioTrack:

    private AudioTrack createAudioTrack(
        boolean tunneling, AudioAttributes audioAttributes, int audioSessionId) {
      if (Util.SDK_INT >= 29) {
        return createAudioTrackV29(tunneling, audioAttributes, audioSessionId);
      } else if (Util.SDK_INT >= 21) {
        return createAudioTrackV21(tunneling, audioAttributes, audioSessionId);
      } else {
        return createAudioTrackV9(audioAttributes, audioSessionId);
      }
    }
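
For reference, on API 21+ this boils down to the standard AudioTrack constructor that takes AudioAttributes and AudioFormat. Below is a minimal sketch with illustrative parameters; the real code derives the sample rate, channel mask, encoding and buffer size from its Configuration, and additionally handles tunneling and offload:

  import android.media.AudioAttributes;
  import android.media.AudioFormat;
  import android.media.AudioTrack;

  AudioTrack buildPcmTrackV21(int sampleRate, int channelMask, int bufferSize, int sessionId) {
    AudioAttributes attributes =
        new AudioAttributes.Builder()
            .setUsage(AudioAttributes.USAGE_MEDIA)
            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
            .build();
    AudioFormat format =
        new AudioFormat.Builder()
            .setSampleRate(sampleRate)
            .setChannelMask(channelMask) // e.g. AudioFormat.CHANNEL_OUT_STEREO
            .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
            .build();
    // MODE_STREAM: data is pushed incrementally via write(), which is exactly
    // how DefaultAudioSink drives the track.
    return new AudioTrack(attributes, format, bufferSize, AudioTrack.MODE_STREAM, sessionId);
  }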

createAudioTrack chooses a construction path according to the Android SDK version. These paths are standard framework code (the API 21+ path is sketched above), so we won't trace into them; the construction is wrapped in quite a few layers, but it is all straightforward. Back at the top in handleBuffer: after the AudioTrack is created, the track parameters (playback speed, pitch and so on) are applied, and what matters next is the snippet that decides whether to start playback:

      /* Start playing if the sink is in the playing state */
      if (playing) {
        play();
      }

Remember when the playing flag gets set to true? It happens in startRenderers() in the doSomeWork main loop, which calls down layer by layer until it lands in DefaultAudioSink's play():

  @Override
  public void play() {
    playing = true;
    if (isAudioTrackInitialized()) {
      audioTrackPositionTracker.start();
      audioTrack.play();
    }
  }

Back in handleBuffer once more, you will notice an inputBuffer field. Note that this variable is owned by DefaultAudioSink; the output buffer from outside is simply assigned to it:

      /* Hand the decoder's output buffer over to the input buffer owned by DefaultAudioSink */
      inputBuffer = buffer;
      inputBufferAccessUnitCount = encodedAccessUnitCount;

The buffer here is the decoder output buffer we fetched earlier via getOutputBuffer(outputIndex) in drainOutputBuffer (releaseOutputBuffer only frees it afterwards).
The last two comments are self-explanatory: processBuffers is called to write data into the AudioTrack; if all of the buffer's data makes it into the AudioTrack, inputBuffer is nulled out, otherwise the next call runs processBuffers again to push the remainder into the track. Let's focus on how processBuffers actually writes to the track:

  private void processBuffers(long avSyncPresentationTimeUs) throws WriteException {
    /* activeAudioProcessors.length is 0 in this playback scenario */
    int count = activeAudioProcessors.length;
    int index = count;
    while (index >= 0) {
      /* With no active processors this reduces to: input = inputBuffer */
      ByteBuffer input = index > 0 ? outputBuffers[index - 1]
          : (inputBuffer != null ? inputBuffer : AudioProcessor.EMPTY_BUFFER);
      if (index == count) {
        /* Write data into the AudioTrack */
        writeBuffer(input, avSyncPresentationTimeUs);
      } else {
        AudioProcessor audioProcessor = activeAudioProcessors[index];
        if (index > drainingAudioProcessorIndex) {
          audioProcessor.queueInput(input);
        }
        ByteBuffer output = audioProcessor.getOutput();
        outputBuffers[index] = output;
        if (output.hasRemaining()) {
          // Handle the output as input to the next audio processor or the AudioTrack.
          index++;
          continue;
        }
      }

      if (input.hasRemaining()) {
        // The input wasn't consumed and no output was produced, so give up for now.
        return;
      }

      // Get more input from upstream.
      index--;
    }
  }
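
For completeness: when processors are active (for example SonicAudioProcessor handling playback-speed changes), the loop above threads the data through the chain before it reaches writeBuffer. A sketch of a single hop using ExoPlayer's AudioProcessor interface; the variable names here are illustrative:

  // Offer the upstream data to the processor, then pass whatever it has
  // produced onwards (to the next processor, or to the AudioTrack).
  audioProcessor.queueInput(input);               // may consume only part of input
  ByteBuffer output = audioProcessor.getOutput(); // may still be empty
  if (output.hasRemaining()) {
    writeBuffer(output, avSyncPresentationTimeUs);
  }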

Logging confirms that activeAudioProcessors.length is 0 in this playback scenario, so we only need to look at writeBuffer:

  private void writeBuffer(ByteBuffer buffer, long avSyncPresentationTimeUs) throws WriteException {
    /* Bail out if the buffer has no data */
    if (!buffer.hasRemaining()) {
      return;
    }
    /* The net effect of this if-else is to ensure: outputBuffer == buffer */
    if (outputBuffer != null) {
      Assertions.checkArgument(outputBuffer == buffer);
    } else {
      outputBuffer = buffer;
      if (Util.SDK_INT < 21) {
        int bytesRemaining = buffer.remaining();
        if (preV21OutputBuffer == null || preV21OutputBuffer.length < bytesRemaining) {
          preV21OutputBuffer = new byte[bytesRemaining];
        }
        int originalPosition = buffer.position();
        buffer.get(preV21OutputBuffer, 0, bytesRemaining);
        buffer.position(originalPosition);
        preV21OutputBufferOffset = 0;
      }
    }
    /* Total number of bytes available to copy from the buffer */
    int bytesRemaining = buffer.remaining();
    /* Number of bytes actually written into the AudioTrack */
    int bytesWritten = 0;
    if (Util.SDK_INT < 21) { // outputMode == OUTPUT_MODE_PCM.
      // Work out how many bytes we can write without the risk of blocking.
      /* How much data can be written to the AudioTrack without blocking */
      int bytesToWrite = audioTrackPositionTracker.getAvailableBufferSize(writtenPcmBytes);
      if (bytesToWrite > 0) {
        /* Final amount we will attempt to write */
        bytesToWrite = min(bytesRemaining, bytesToWrite);
        /* Bytes actually accepted by the AudioTrack */
        bytesWritten = audioTrack.write(preV21OutputBuffer, preV21OutputBufferOffset, bytesToWrite);
        if (bytesWritten > 0) {
          preV21OutputBufferOffset += bytesWritten;
          buffer.position(buffer.position() + bytesWritten);
        }
      }
    } else if (tunneling) {
      Assertions.checkState(avSyncPresentationTimeUs != C.TIME_UNSET);
      bytesWritten =
          writeNonBlockingWithAvSyncV21(
              audioTrack, buffer, bytesRemaining, avSyncPresentationTimeUs);
    } else {
      bytesWritten = writeNonBlockingV21(audioTrack, buffer, bytesRemaining);
    }

    lastFeedElapsedRealtimeMs = SystemClock.elapsedRealtime();

    if (bytesWritten < 0) {
      boolean isRecoverable = isAudioTrackDeadObject(bytesWritten);
      if (isRecoverable) {
        maybeDisableOffload();
      }
      throw new WriteException(bytesWritten);
    }

    if (isOffloadedPlayback(audioTrack)) {
      // After calling AudioTrack.setOffloadEndOfStream, the AudioTrack internally stops and
      // restarts during which AudioTrack.write will return 0. This situation must be detected to
      // prevent reporting the buffer as full even though it is not which could lead ExoPlayer to
      // sleep forever waiting for a onDataRequest that will never come.
      if (writtenEncodedFrames > 0) {
        isWaitingForOffloadEndOfStreamHandled = false;
      }

      // Consider the offload buffer as full if the AudioTrack is playing and AudioTrack.write could
      // not write all the data provided to it. This relies on the assumption that AudioTrack.write
      // always writes as much as possible.
      if (playing
          && listener != null
          && bytesWritten < bytesRemaining
          && !isWaitingForOffloadEndOfStreamHandled) {
        long pendingDurationMs =
            audioTrackPositionTracker.getPendingBufferDurationMs(writtenEncodedFrames);
        listener.onOffloadBufferFull(pendingDurationMs);
      }
    }

    if (configuration.outputMode == OUTPUT_MODE_PCM) {
      writtenPcmBytes += bytesWritten;
    }
    /* Equal counts mean all of the buffer's data went into the AudioTrack */
    if (bytesWritten == bytesRemaining) {
      if (configuration.outputMode != OUTPUT_MODE_PCM) {
        // When playing non-PCM, the inputBuffer is never processed, thus the last inputBuffer
        // must be the current input buffer.
        Assertions.checkState(buffer == inputBuffer);
        writtenEncodedFrames += framesPerEncodedSample * inputBufferAccessUnitCount;
      }
      /* Null out this class's outputBuffer */
      outputBuffer = null;
    }
  }

This function looks long, but it too is easy to follow. The middle section is the origin of the producer-consumer decision: before API 21 (Android 5.0), AudioTrack had no non-blocking write mode, so the code works out by hand how many bytes it can write without risking a block. If all of the buffer's data fits into the AudioTrack, demand exceeds supply and we can keep writing; if some data is left over, supply exceeds demand and writing should pause until the track drains. This is exactly why drainOutputBuffer serves as the while condition outside.
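
The API 21+ branch leans on the framework instead: the ByteBuffer overload of AudioTrack.write accepts the WRITE_NON_BLOCKING mode and returns immediately with however many bytes actually fit (possibly 0). That is essentially all the writeNonBlockingV21 helper called in the code above does:

  private static int writeNonBlockingV21(AudioTrack audioTrack, ByteBuffer buffer, int size) {
    return audioTrack.write(buffer, size, AudioTrack.WRITE_NON_BLOCKING);
  }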

Summary:
1. drainOutputBuffer writes the decoded data into the AudioTrack following a producer-consumer pattern;
2. The first call to drainOutputBuffer creates the AudioTrack and starts trying to write data into it.

3. Analysis of the feedInputBuffer function:
The purpose of feedInputBuffer is to read data out of the media stream and write it into the codec's input buffers for decoding. Here is the important part of feedInputBuffer's source:

  /**
   * @return Whether it may be possible to feed more input data.
   * @throws ExoPlaybackException If an error occurs feeding the input buffer.
   */
  private boolean feedInputBuffer() throws ExoPlaybackException {
    if (codec == null || codecDrainState == DRAIN_STATE_WAIT_END_OF_STREAM || inputStreamEnded) {
      return false;
    }

    /* Find an available input buffer */
    if (inputIndex < 0) {
      inputIndex = codecAdapter.dequeueInputBufferIndex();
      if (inputIndex < 0) {
        return false;
      }
      buffer.data = getInputBuffer(inputIndex);
      buffer.clear();
    }
	...
    @SampleStream.ReadDataResult
    /* Read data from the media stream into the buffer */
    int result = readSource(formatHolder, buffer, /* formatRequired= */ false);
	...
    /* Adjust the timestamp of the first input buffer obtained */
    onQueueInputBuffer(buffer);
    try {
      if (bufferEncrypted) {
        codecAdapter.queueSecureInputBuffer(
            inputIndex, /* offset= */ 0, buffer.cryptoInfo, presentationTimeUs, /* flags= */ 0);
      } else {
        /* Queue the input buffer */
        codecAdapter.queueInputBuffer(
            inputIndex, /* offset= */ 0, buffer.data.limit(), presentationTimeUs, /* flags= */ 0);
      }
    } catch (CryptoException e) {
      throw createRendererException(e, inputFormat);
    }

    /* Reset the input buffer */
    resetInputBuffer();
    codecReceivedBuffers = true;
    codecReconfigurationState = RECONFIGURATION_STATE_NONE;
    /* Count the total number of input buffers queued to the decoder */
    decoderCounters.inputBufferCount++;
    return true;
  }

The full source is very long, but after stepping through it in a debugger it turns out to be easy to follow; the key lines are annotated. Just as on the output side, it first dequeues from the codec the index of a buffer that can accept data, then looks that buffer up. The key step that follows is the readSource call that fills the buffer with data from the media stream. Let's trace it:

  @SampleStream.ReadDataResult
  protected final int readSource(
      FormatHolder formatHolder, DecoderInputBuffer buffer, boolean formatRequired) {
    @SampleStream.ReadDataResult
    /* Dispatch to the readData implementation for this stream type */
    int result = Assertions.checkNotNull(stream).readData(formatHolder, buffer, formatRequired);
    if (result == C.RESULT_BUFFER_READ) {
      if (buffer.isEndOfStream()) {
        readingPositionUs = C.TIME_END_OF_SOURCE;
        return streamIsFinal ? C.RESULT_BUFFER_READ : C.RESULT_NOTHING_READ;
      }
      buffer.timeUs += streamOffsetUs;
      readingPositionUs = max(readingPositionUs, buffer.timeUs);
    } else if (result == C.RESULT_FORMAT_READ) {
      Format format = Assertions.checkNotNull(formatHolder.format);
      if (format.subsampleOffsetUs != Format.OFFSET_SAMPLE_RELATIVE) {
        format =
            format
                .buildUpon()
                .setSubsampleOffsetUs(format.subsampleOffsetUs + streamOffsetUs)
                .build();
        formatHolder.format = format;
      }
    }
    return result;
  }
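
Before following readData down, note the three results it can return (the C.RESULT_* constants are ExoPlayer's, visible in the code above). A sketch of how a caller branches on them; the switch itself and the two helper calls are illustrative, not ExoPlayer's actual code:

  switch (result) {
    case C.RESULT_BUFFER_READ:  // buffer now holds one sample (data, timeUs, flags)
      queueToCodec(buffer);     // hypothetical next step: hand it to the decoder
      break;
    case C.RESULT_FORMAT_READ:  // formatHolder.format was (re)populated, no sample data
      handleInputFormatChange(formatHolder); // hypothetical format-change handler
      break;
    case C.RESULT_NOTHING_READ: // nothing available yet; retry on a later render()
    default:
      break;
  }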

We only care about readData here. I debugged ExoPlayer's official demo playing an already-cached DASH stream, and the debugger shows the implementation class to be ChunkSampleStream. Here is its readData:

  @Override
  public int readData(FormatHolder formatHolder, DecoderInputBuffer buffer,
      boolean formatRequired) {
    if (isPendingReset()) {
      return C.RESULT_NOTHING_READ;
    }
    if (canceledMediaChunk != null
        && canceledMediaChunk.getFirstSampleIndex(/* trackIndex= */ 0)
            <= primarySampleQueue.getReadIndex()) {
      // Don't read into chunk that's going to be discarded.
      // TODO: Support splicing to allow this. See [internal b/161130873].
      return C.RESULT_NOTHING_READ;
    }
    maybeNotifyPrimaryTrackFormatChanged();
    /* The sample data is read from here */
    return primarySampleQueue.read(formatHolder, buffer, formatRequired, loadingFinished);
  }

Continuing down:

  @CallSuper
  public int read(
      FormatHolder formatHolder,
      DecoderInputBuffer buffer,
      boolean formatRequired,
      boolean loadingFinished) {
    int result =
        readSampleMetadata(formatHolder, buffer, formatRequired, loadingFinished, extrasHolder);
    if (result == C.RESULT_BUFFER_READ && !buffer.isEndOfStream() && !buffer.isFlagsOnly()) {
      /* Read the sample data from the queue */
      sampleDataQueue.readToBuffer(buffer, extrasHolder);
    }
    return result;
  }

And then readToBuffer:

  public void readToBuffer(DecoderInputBuffer buffer, SampleExtrasHolder extrasHolder) {
    // Read encryption data if the sample is encrypted.
    if (buffer.isEncrypted()) {
      readEncryptionData(buffer, extrasHolder);
    }
    // Read sample data, extracting supplemental data into a separate buffer if needed.
    if (buffer.hasSupplementalData()) {
      // If there is supplemental data, the sample data is prefixed by its size.
      scratch.reset(4);
      readData(extrasHolder.offset, scratch.getData(), 4);
      int sampleSize = scratch.readUnsignedIntToInt();
      extrasHolder.offset += 4;
      extrasHolder.size -= 4;

      // Write the sample data.
      buffer.ensureSpaceForWrite(sampleSize);
      readData(extrasHolder.offset, buffer.data, sampleSize);
      extrasHolder.offset += sampleSize;
      extrasHolder.size -= sampleSize;

      // Write the remaining data as supplemental data.
      buffer.resetSupplementalData(extrasHolder.size);
      readData(extrasHolder.offset, buffer.supplementalData, extrasHolder.size);
    } else {
      // Write the sample data into the buffer.
      buffer.ensureSpaceForWrite(extrasHolder.size);
      readData(extrasHolder.offset, buffer.data, extrasHolder.size);
    }
  }

ExoPlayer wraps the actual write path fairly tightly, so we won't dig further here. Back in feedInputBuffer, after the read result is obtained, a few checks follow: whether anything was read, whether the stream format changed, adjusting the timestamp of the first input buffer read, and so on (see the source for details). Then MediaCodec.queueInputBuffer() is called to queue the buffer into the decoder, and the rest is cleanup work.
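
Stripped of ExoPlayer's wrapping, the feed path is the standard MediaCodec input pattern. A minimal sketch with illustrative names; readFromSampleQueueInto stands in for the SampleStream/SampleQueue read traced above:

  int inputIndex = codec.dequeueInputBuffer(/* timeoutUs= */ 0); // don't block
  if (inputIndex >= 0) {
    ByteBuffer input = codec.getInputBuffer(inputIndex);         // API 21+
    int size = readFromSampleQueueInto(input);                   // fill with one sample
    codec.queueInputBuffer(inputIndex, /* offset= */ 0, size, presentationTimeUs, /* flags= */ 0);
  }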

Summary:
feedInputBuffer is simpler still in principle: it writes data from the media stream into the codec's input buffers, and the codec takes it from there.
