A Review of AudioTrack in the Android System

AudioTrack

AudioTrack allows streaming of PCM audio buffers to the audio sink for playback.
An AudioTrack instance can operate under two modes: static or streaming.

Location:
Android application framework layer (Java)

android/frameworks/base/media/java/android/media/AudioTrack.java 
//public Java API; this directory also holds the other public audio classes such as AudioManager and AudioRecord

Native framework layer

android/frameworks/av/media/libaudioclient/AudioTrack.cpp    

1. MODE_STATIC and MODE_STREAM

android/frameworks/base/media/java/android/media/AudioTrack.java


     // keep these values in sync with android_media_AudioTrack.cpp
     /**
      * Creation mode where audio data is transferred from Java to the native layer
      * only once before the audio starts playing.
      */
     public static final int MODE_STATIC = 0; 
     /**
      * Creation mode where audio data is streamed from Java to the native layer
      * as the audio is playing.
      */
     public static final int MODE_STREAM = 1; // the most common mode: audio data is handed to the buffer chunk by chunk

MODE_STATIC
At creation time the entire audio clip is put into one fixed buffer and handed to the AudioTrack up front, so there is no need to write() it piece by piece afterwards; the AudioTrack plays the data in that buffer by itself. This usage typically maps to the fast track in the native layer, i.e. the low-latency playback path, whose flag is AUDIO_OUTPUT_FLAG_FAST. It is similar to deep-buffer playback, but the buffer it allocates is smaller and little or no processing is done on the ADSP side; it is mainly used for audio with tight latency requirements, typically touch tones, game sounds, and so on.

The static mode should be chosen when dealing with short sounds that fit in memory and that need to be played with the smallest latency possible.

MODE_STREAM:

In Streaming mode, the application writes a continuous stream of data to the AudioTrack, using write() methods.
These are blocking and return when the data has been transferred from the Java layer to the native layer and queued for playback. The streaming mode is most useful when playing blocks of audio data that for instance are:
1. too big to fit in memory because of the duration of the sound to play,
2. too big to fit in memory because of the characteristics of the audio data (high sampling rate, bits per sample, ...),
3. received or generated while previously queued audio is playing.

    /**
      * Returns the current performance mode of the {@link AudioTrack}.
      *
      * @return one of {@link AudioTrack#PERFORMANCE_MODE_NONE},
      * {@link AudioTrack#PERFORMANCE_MODE_LOW_LATENCY},
      * or {@link AudioTrack#PERFORMANCE_MODE_POWER_SAVING}.
      * Use {@link AudioTrack.Builder#setPerformanceMode}
      * in the {@link AudioTrack.Builder} to enable a performance mode.
      * @throws IllegalStateException if track is not initialized.
      */
     public @PerformanceMode int getPerformanceMode() {
         final int flags = native_get_flags();
         if ((flags & AUDIO_OUTPUT_FLAG_FAST) != 0) {
             return PERFORMANCE_MODE_LOW_LATENCY;
         } else if ((flags & AUDIO_OUTPUT_FLAG_DEEP_BUFFER) != 0) {
             return PERFORMANCE_MODE_POWER_SAVING;
         } else {
             return PERFORMANCE_MODE_NONE;
         }
     }
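
To see how these performance modes are requested from the application side, here is a minimal sketch using AudioTrack.Builder; the sample rate, format, and log tag are assumptions, and getPerformanceMode() above reports which mode the framework actually granted.

    // Request a low-latency (fast) track via the Builder API; the request may be denied.
    int sampleRate = 48000;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack.Builder()
            .setAudioAttributes(new AudioAttributes.Builder()
                    .setUsage(AudioAttributes.USAGE_MEDIA)
                    .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                    .build())
            .setAudioFormat(new AudioFormat.Builder()
                    .setSampleRate(sampleRate)
                    .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                    .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                    .build())
            .setBufferSizeInBytes(minBuf)
            .setTransferMode(AudioTrack.MODE_STREAM)
            .setPerformanceMode(AudioTrack.PERFORMANCE_MODE_LOW_LATENCY) // a request, not a guarantee
            .build();
    Log.d("AudioTrackDemo", "granted performance mode = " + track.getPerformanceMode());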

2. audio buffer

Upon creation, an AudioTrack object initializes its associated audio buffer. The size of this buffer, specified during construction, determines how long an AudioTrack can play before running out of data.
For an AudioTrack using the static mode, this size is the maximum size of the sound that can be played from it.
For the streaming mode, data will be written to the audio sink in chunks of sizes less than or equal to the total buffer size.

getBufferSizeInFrames(): the actual size, in frames, of the buffer that was created; this determines the minimum frequency at which a streaming AudioTrack must be written to in order to avoid underrun.
getMinBufferSize(): returns the estimated minimum buffer size for an AudioTrack instance in streaming mode.

3. Using AudioTrack at the application layer

The application layer can call the AudioTrack APIs directly to play PCM audio data:

 // constants for test
 final String TEST_NAME = "testSetBufferSize";
 final int TEST_SR = 44100;                               // sampling rate
 final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;    // stereo (two channels)
 final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;  // 16 bits per sample
 final int TEST_MODE = AudioTrack.MODE_STREAM;            // data is written to the AudioTrack buffer chunk by chunk via write(), e.g. PCM produced by a decoder
 final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

// -------- initialization --------------
 int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
 AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
           minBuffSize, TEST_MODE);
 track.play();                                 // start playback
 ...
 track.write(bytes_pkg, 0, bytes_pkg.length);  // write data into the track
 track.stop();                                 // stop playback
 track.release();                              // release the underlying native resources

Note: if the AudioTrack was created in MODE_STATIC, you must write() the data before play() is called. In MODE_STREAM you can optionally prime the data path before play() by writing up to bufferSizeInBytes (the size passed to the constructor) of data.
From the play() documentation: if you don't call write() first, or if you call write() but with an insufficient amount of data, then the track will be in underrun state at play(). In this case, playback will not actually start playing until the data path is filled to a device-specific minimum level. This requirement for the path to be filled to a minimum level is also true when resuming audio playback after calling stop(). Similarly, the buffer will need to be filled up again after the track underruns due to failure to call write() in a timely manner with sufficient data. For portability, an application should prime the data path to the maximum allowed by writing data until the write() method returns a short transfer count. This allows play() to start immediately, and reduces the chance of underrun.
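
To illustrate the MODE_STATIC rule above, here is a minimal sketch that primes the static buffer before play(); loadShortPcmClip() is a hypothetical helper and the 44.1 kHz stereo 16-bit clip format is an assumption.

    // MODE_STATIC: the whole clip must be written into the track before play().
    byte[] clip = loadShortPcmClip();             // hypothetical helper returning a short PCM clip
    AudioTrack staticTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            44100,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            clip.length,                           // the buffer must hold the entire clip
            AudioTrack.MODE_STATIC);
    staticTrack.write(clip, 0, clip.length);       // prime the static buffer first
    staticTrack.play();                            // playback can then start immediately
    // staticTrack.reloadStaticData() rewinds the same buffer so the clip can be replayed.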

More usage examples:
android/frameworks/base/media/jni/soundpool/SoundPool.cpp

About write()

write() writes the audio data to the audio sink for playback (streaming mode), or copies audio data for later playback (static buffer mode).

In streaming mode, the blocking behavior depends on the write mode. If the write mode is WRITE_BLOCKING, the write will normally block until all the data has been enqueued for playback, and will return a full transfer count.
However, if the write mode is WRITE_NON_BLOCKING, or the track is stopped or paused on entry, or another thread interrupts the write by calling stop or pause, or an I/O error occurs during the write, then the write may return a short transfer count. In static buffer mode, the data is copied to the buffer starting at offset 0, and the write mode is ignored.
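
As an illustration of handling short transfer counts, here is a minimal streaming-write sketch; decodeNextChunk() is a hypothetical decoder helper and track is the streaming AudioTrack from the earlier example.

    // Keep writing until the whole chunk is consumed; WRITE_NON_BLOCKING may return a
    // short transfer count, and negative return values are errors.
    byte[] pcm = decodeNextChunk();                // hypothetical decoder output
    int offset = 0;
    while (offset < pcm.length) {
        int written = track.write(pcm, offset, pcm.length - offset,
                AudioTrack.WRITE_NON_BLOCKING);
        if (written < 0) {                         // e.g. ERROR_INVALID_OPERATION
            Log.e("AudioTrackDemo", "write failed: " + written);
            break;
        }
        offset += written;                         // short count: write the remainder on the next pass
    }

In a real application a zero return value (buffer full) would be handled by waiting for space rather than spinning in the loop.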

About StreamType

The stream type ties into Android's AudioManager and the phone's audio-policy management; the details will be explained when audio policy is covered.
Android divides the system's sounds into the following categories:

enum stream_type {
    DEFAULT          = -1,
    VOICE_CALL       = 0,   // voice calls
    SYSTEM           = 1,   // system sounds: low battery, lock screen, boot/shutdown tones, etc.
    RING             = 2,   // incoming-call ringtone
    MUSIC            = 3,   // media playback
    ALARM            = 4,   // alarms
    NOTIFICATION     = 5,   // SMS and notifications
    BLUETOOTH_SCO    = 6,   // Bluetooth calls
    ENFORCED_AUDIBLE = 7,   // sounds that cannot be muted
    DTMF             = 8,   // dial-pad key tones
    TTS              = 9,   // text to speech
    FM               = 10,  // FM radio
    NUM_STREAM_TYPES
};

getMinBufferSize()

static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat)
int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);

getMinBufferSize(), as the name suggests, returns the minimum buffer size that still allows sound to play back properly; it is the baseline for correct playback. From the parameters you can see that the return value depends on three attributes: sample rate, sample depth, and channel count. In MODE_STREAM, applications use this return value as the reference when deciding how large a data buffer to allocate. If the buffer is too small, playback will frequently hit underruns; an underrun means the producer (AudioTrack) cannot supply data as fast as the consumer (AudioFlinger::PlaybackThread) drains it, and the audible result is stuttering, choppy sound that seriously degrades the listening experience.
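
A common way to reduce the risk of underruns, sketched below, is to allocate the streaming buffer as a multiple of getMinBufferSize(); the 2x headroom factor is an assumption to tune per app, not a rule.

    // Allocate the streaming buffer larger than the minimum to leave headroom against underruns.
    int minSize = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    int bufferSize = 2 * minSize;                  // assumed headroom factor
    AudioTrack streamTrack = new AudioTrack(
            AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            bufferSize, AudioTrack.MODE_STREAM);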

4. Creating the AudioTrack in the native framework

When MediaPlayer plays audio or video, an AudioTrack object is created to play the audio data.
So when exactly is that AudioTrack created?

status_t MediaPlayerService::Client::setDataSource()
sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre()

    // create the right type of player
740     sp<MediaPlayerBase> p = createPlayer(playerType);
741     if (p == NULL) {
742         return p;
743     }
        ......
782     if (!p->hardwareOutput()) {
783         mAudioOutput = new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid(),
784                 mPid, mAudioAttributes);
            // call the player's setAudioSink() to hand the mAudioOutput object to that player's AudioSink
785         static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
786     }
787 
788     return p;

android/frameworks/av/media/libmediaplayerservice/nuplayer/NuPlayerRenderer.cpp

status_t NuPlayer::Renderer::onOpenAudioSink(){
......
1997              ALOGV("openAudioSink: try to open AudioSink in offload mode");
1998              uint32_t offloadFlags = flags;
1999              offloadFlags |= AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD;
2000              offloadFlags &= ~AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
2001              audioSinkChanged = true;
2002              mAudioSink->close();
2003  
2004              err = mAudioSink->open(
2005                      sampleRate,
2006                      numChannels,
2007                      (audio_channel_mask_t)channelMask,
2008                      audioFormat,
2009                      0 /* bufferCount - unused */,
2010                      &NuPlayer::Renderer::AudioSinkCallback,
2011                      this,
2012                      (audio_output_flags_t)offloadFlags,
2013                      &offloadInfo);
......
2047      if (!offloadOnly && !offloadingAudio()) {
2048          ALOGV("openAudioSink: open AudioSink in NON-offload mode");
2049          uint32_t pcmFlags = flags;
2050          pcmFlags &= ~AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD;
}
......

2090          status_t err = mAudioSink->open(
2091                      sampleRate,
2092                      numChannels,
2093                      (audio_channel_mask_t)channelMask,
2094                      AVNuUtils::get()->getPCMFormat(format),
2095                      0 /* bufferCount - unused */,
2096                      mUseAudioCallback ? &NuPlayer::Renderer::AudioSinkCallback : NULL,
2097                      mUseAudioCallback ? this : NULL,
2098                      (audio_output_flags_t)pcmFlags,
2099                      NULL,
2100                      doNotReconnect,
2101                      frameCount);
2102          if (err != OK) {
2103              ALOGW("openAudioSink: non offloaded open failed status: %d", err);
2104              mAudioSink->close();
2105              mCurrentPcmInfo = AUDIO_PCMINFO_INITIALIZER;
2106              return err;
2107          }
2108          mCurrentPcmInfo = info;
2109          if (!mPaused) { // for preview mode, don't start if paused
2110              mAudioSink->start();
2111          }
2112      }

The AudioTrack is created inside AudioOutput::open(). AudioOutput is an inner class of MediaPlayerService and a subclass of MediaPlayerBase::AudioSink.
android/frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp

status_t MediaPlayerService::AudioOutput::open(
1862         uint32_t sampleRate, int channelCount, audio_channel_mask_t channelMask,
1863         audio_format_t format, int bufferCount,
1864         AudioCallback cb, void *cookie,
1865         audio_output_flags_t flags,
1866         const audio_offload_info_t *offloadInfo,
1867         bool doNotReconnect,
1868         uint32_t suggestedFrameCount)
1869 {
......
1966     sp<AudioTrack> t;
1967     CallbackData *newcbd = NULL;
1968 
1969     // We don't attempt to create a new track if we are recycling an
1970     // offloaded track. But, if we are recycling a non-offloaded or we
1971     // are switching where one is offloaded and one isn't then we create
1972     // the new track in advance so that we can read additional stream info
1973 
1974     if (!(reuse && bothOffloaded)) {
1975         ALOGV("creating new AudioTrack");
1976 
1977         if (mCallback != NULL) {
1978             newcbd = new CallbackData(this);
1979             t = new AudioTrack(
1980                     mStreamType,
1981                     sampleRate,
1982                     format,
1983                     channelMask,
1984                     frameCount,
1985                     flags,
1986                     CallbackWrapper,
1987                     newcbd,
1988                     0,  // notification frames
1989                     mSessionId,
1990                     AudioTrack::TRANSFER_CALLBACK,
1991                     offloadInfo,
1992                     mUid,
1993                     mPid,
1994                     mAttributes,
1995                     doNotReconnect);
1996         } else {
1997             // TODO: Due to buffer memory concerns, we use a max target playback speed
1998             // based on mPlaybackRate at the time of open (instead of kMaxRequiredSpeed),
1999             // also clamping the target speed to 1.0 <= targetSpeed <= kMaxRequiredSpeed.
2000             const float targetSpeed =
2001                     std::min(std::max(mPlaybackRate.mSpeed, 1.0f), kMaxRequiredSpeed);
2002             ALOGW_IF(targetSpeed != mPlaybackRate.mSpeed,
2003                     "track target speed:%f clamped from playback speed:%f",
2004                     targetSpeed, mPlaybackRate.mSpeed);
2005             t = new AudioTrack(
2006                     mStreamType,
2007                     sampleRate,
2008                     format,
2009                     channelMask,
2010                     frameCount,
2011                     flags,
2012                     NULL, // callback
2013                     NULL, // user data
2014                     0, // notification frames
2015                     mSessionId,
2016                     AudioTrack::TRANSFER_DEFAULT,
2017                     NULL, // offload info
2018                     mUid,
2019                     mPid,
2020                     mAttributes,
2021                     doNotReconnect,
2022                     targetSpeed);
2023         }

android/frameworks/av/media/libmediaplayerservice/nuplayer/NuPlayerRenderer.cpp
//pass decoded raw audio/video data to the renderer for buffering and A/V sync
165 void NuPlayer::Renderer::queueBuffer(
166         bool audio,
167         const sp<MediaCodecBuffer> &buffer,
168         const sp<AMessage> &notifyConsumed) {
169     int64_t mediaTimeUs = -1;
170     buffer->meta()->findInt64("timeUs", &mediaTimeUs);
171     VTRACE_ASYNC_BEGIN(audio ? "render-audio" : "render-video", (int)mediaTimeUs);
172 
173     sp<AMessage> msg = new AMessage(kWhatQueueBuffer, this);
174     msg->setInt32("queueGeneration", getQueueGeneration(audio));
175     msg->setInt32("audio", static_cast<int32_t>(audio));
176     msg->setObject("buffer", buffer);
177     msg->setMessage("notifyConsumed", notifyConsumed);
178     msg->post();
179 }

Related functions on the receiving side are NuPlayer::Renderer::onQueueBuffer(const sp<AMessage> &msg) and NuPlayer::Renderer::syncQueuesDone_l().
(Reference: http://makaidong.com/tocy/387821_1644854.html)

AudioTrack is the hardware audio sink.
AudioSink is used for in-memory decode and potentially other applications where output doesn't go straight to hardware.

Note: AudioSink does not do any decoding itself; it receives decoded data from NuPlayerRenderer. In open() it either reuses an existing AudioTrack (mRecycledTrack) or creates a new one, and in start() it uses that AudioTrack to play the decoded PCM data. The AudioTrack in turn interacts with AudioFlinger, which mixes the data and hands the mix to the audio hardware for output. In that sense AudioSink can be thought of as a small "player".

In short, when MediaPlayer plays audio/video the actual playback is done by NuPlayer: NuPlayerDecoder drives the decoder, the decoded data is handed to NuPlayerRenderer via queueBuffer(), and the renderer buffers the raw audio/video data, performs A/V sync and other playback-control work; along the way it uses AudioSink to create the AudioTrack that plays the audio PCM data.

5. AudioTrack processing

On Android you can play sound with either MediaPlayer or AudioTrack. MediaPlayer can play many audio formats (MP3, AAC, WAV, OGG, MIDI, ...), whereas AudioTrack only accepts PCM streams. Underneath, the two are the same: when MediaPlayer plays audio, the framework still creates an AudioTrack, feeds it the decoded PCM stream, lets AudioFlinger mix it, and sends the mixed data to the audio hardware for playback.

A few audio concepts

Frame: a frame is one complete sound unit, i.e. one sample instant across all channels. For stereo, a complete sound unit is 2 samples; for 5.1, it is 6 samples. The frame size (the amount of data in one complete sound unit) equals the channel count times the bytes per sample: frameSize = channelCount * bytesPerSample.
Latency: Linux ALSA divides the data buffer into several periods; each time the DMA finishes transferring one period it raises a hardware interrupt, and the CPU then programs the DMA to transfer the next period. The period size (periodSize) is the number of frames in one period. The transfer latency is the period size divided by the sample rate: latency = periodSize / sampleRate.
Resampling: resampling converts audio from one sample rate to another. On stock Android the audio hardware usually runs at one fixed sample rate (e.g. 48 kHz), so every track must be resampled to that rate before output. Several tracks with different sample rates may be playing at the same time; for example, a notification tone arrives while music is playing, and the two have to be mixed and sent to the hardware. If the hardware ran at the music's sample rate, the notification tone would be distorted. The simplest effective solution is to fix the hardware sample rate and have AudioFlinger resample every track to it before mixing, so that no track sounds distorted.
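
To make the two formulas above concrete, here is a tiny sketch with assumed values (48 kHz stereo 16-bit PCM and a 20 ms ALSA period).

    // frameSize = channelCount * bytesPerSample; latency = periodSize / sampleRate
    int channelCount     = 2;                      // stereo
    int bytesPerSample   = 2;                      // 16-bit PCM
    int sampleRate       = 48000;                  // assumed hardware rate
    int periodSizeFrames = 960;                    // assumed ALSA period (20 ms at 48 kHz)

    int frameSize = channelCount * bytesPerSample;              // 4 bytes per frame
    double latencyMs = 1000.0 * periodSizeFrames / sampleRate;  // 960 / 48000 s = 20 ms
    System.out.printf("frameSize=%d bytes, period latency=%.1f ms%n", frameSize, latencyMs);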
AudioTrack::getMinFrameCount()

status_t AudioTrack::getMinFrameCount(
        size_t* frameCount,
        audio_stream_type_t streamType,
        uint32_t sampleRate)
{
    if (frameCount == NULL) {
        return BAD_VALUE;
    }

    // binder call to AudioFlinger::sampleRate(): query the hardware output sample rate
    uint32_t afSampleRate;
    status_t status;
    status = AudioSystem::getOutputSamplingRate(&afSampleRate, streamType);
    if (status != NO_ERROR) {
        ALOGE("Unable to query output sample rate for stream type %d; status %d",
                streamType, status);
        return status;
    }
    // binder call to AudioFlinger::frameCount(): query the hardware output period size (frames per period)
    size_t afFrameCount;
    status = AudioSystem::getOutputFrameCount(&afFrameCount, streamType);
    if (status != NO_ERROR) {
        ALOGE("Unable to query output frame count for stream type %d; status %d",
                streamType, status);
        return status;
    }
    // binder call to AudioFlinger::latency(): query the hardware output latency
    uint32_t afLatency;
    status = AudioSystem::getOutputLatency(&afLatency, streamType);
    if (status != NO_ERROR) {
        ALOGE("Unable to query output latency for stream type %d; status %d",
                streamType, status);
        return status;
    }

    // When called from createTrack, speed is 1.0f (normal speed).
    // This is rechecked again on setting playback rate (TODO: on setting sample rate, too).
    // compute a minimum frame count from afSampleRate, afFrameCount and afLatency
    *frameCount = calculateMinFrameCount(afLatency, afFrameCount, afSampleRate, sampleRate, 1.0f);

    // The formula above should always produce a non-zero value under normal circumstances:
    // AudioTrack.SAMPLE_RATE_HZ_MIN <= sampleRate <= AudioTrack.SAMPLE_RATE_HZ_MAX.
    // Return error in the unlikely event that it does not, as that's part of the API contract.
    if (*frameCount == 0) {
        ALOGE("AudioTrack::getMinFrameCount failed for streamType %d, sampleRate %u",
                streamType, sampleRate);
        return BAD_VALUE;
    }
    ALOGV("getMinFrameCount=%zu: afFrameCount=%zu, afSampleRate=%u, afLatency=%u",
            *frameCount, afFrameCount, afSampleRate, afLatency);
    return NO_ERROR;
}

// If you are curious, study the implementation of calculateMinFrameCount(); it helps to know roughly how the resampler works.
static size_t calculateMinFrameCount(
        uint32_t afLatencyMs, uint32_t afFrameCount, uint32_t afSampleRate,
        uint32_t sampleRate, float speed)
{
    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatencyMs / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) {
        minBufCount = 2;
    }
    ALOGV("calculateMinFrameCount afLatency %u  afFrameCount %u  afSampleRate %u  "
            "sampleRate %u  speed %f  minBufCount: %u",
            afLatencyMs, afFrameCount, afSampleRate, sampleRate, speed, minBufCount);
    return minBufCount * sourceFramesNeededWithTimestretch(
            sampleRate, afFrameCount, afSampleRate, speed);
}

static inline size_t sourceFramesNeededWithTimestretch(
        uint32_t srcSampleRate, size_t dstFramesRequired, uint32_t dstSampleRate,
        float speed) {
    // required is the number of input frames the resampler needs
    size_t required = sourceFramesNeeded(srcSampleRate, dstFramesRequired, dstSampleRate);
    // to deliver this, the time stretcher requires:
    return required * (double)speed + 1 + 1; // accounting for rounding dependencies
}

// Returns the source frames needed to resample to destination frames.  This is not a precise
// value and depends on the resampler (and possibly how it handles rounding internally).
// Nevertheless, this should be an upper bound on the requirements of the resampler.
// If srcSampleRate and dstSampleRate are equal, then it returns destination frames, which
// may not be true if the resampler is asynchronous.
static inline size_t sourceFramesNeeded(
        uint32_t srcSampleRate, size_t dstFramesRequired, uint32_t dstSampleRate) {
    // +1 for rounding - always do this even if matched ratio (resampler may use phases not ratio)
    // +1 for additional sample needed for interpolation
    return srcSampleRate == dstSampleRate ? dstFramesRequired :
            size_t((uint64_t)dstFramesRequired * srcSampleRate / dstSampleRate + 1 + 1);
}

Based on the hardware output configuration (sample rate, period size, latency) and the track's own sample rate, a minimum frame count is computed; a small worked example follows.
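
The sketch below redoes the arithmetic of calculateMinFrameCount() in Java with assumed hardware values; it is only a worked example, not the framework code itself.

    // Assumed hardware: afLatencyMs = 40, afFrameCount = 960, afSampleRate = 48000;
    // the track plays at 44100 Hz at normal speed (1.0).
    long afLatencyMs = 40, afFrameCount = 960, afSampleRate = 48000, trackRate = 44100;

    long periodMs    = (1000 * afFrameCount) / afSampleRate;      // 20 ms per period
    long minBufCount = Math.max(afLatencyMs / periodMs, 2);       // 40 / 20 = 2 buffers

    // sourceFramesNeeded(): 44.1 kHz frames needed to produce one 960-frame period at 48 kHz
    long srcFrames            = afFrameCount * trackRate / afSampleRate + 1 + 1;  // 882 + 2 = 884
    long srcFramesWithStretch = (long) (srcFrames * 1.0) + 1 + 1;                 // 886 at speed 1.0

    long minFrameCount = minBufCount * srcFramesWithStretch;      // 2 * 886 = 1772 frames
    System.out.println("minFrameCount = " + minFrameCount);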

transfer_type

 /* How data is transferred to AudioTrack
 */
149     enum transfer_type {
150         TRANSFER_DEFAULT,   // not specified explicitly; determine from the other parameters
151         TRANSFER_CALLBACK,  // callback EVENT_MORE_DATA
152         TRANSFER_OBTAIN,    // call obtainBuffer() and releaseBuffer()
153         TRANSFER_SYNC,      // synchronous write()
154         TRANSFER_SHARED,    // shared memory
155     };

Creating the AudioTrack

We showed the creation of the AudioTrack in the native framework layer above:

1976 
1977         if (mCallback != NULL) {
1978             newcbd = new CallbackData(this);
1979             t = new AudioTrack(
1980                     mStreamType,
1981                     sampleRate,
1982                     format,
1983                     channelMask,
1984                     frameCount,
1985                     flags,
1986                     CallbackWrapper,
1987                     newcbd,
1988                     0,  // notification frames
1989                     mSessionId,
1990                     AudioTrack::TRANSFER_CALLBACK, // data is handed to the AudioTrack via callbacks
1991                     offloadInfo,
1992                     mUid,
1993                     mPid,
1994                     mAttributes,
1995                     doNotReconnect);

Now let's look at the AudioTrack.cpp constructor:
android/frameworks/av/media/libaudioclient/AudioTrack.cpp


196 AudioTrack::AudioTrack(
197         audio_stream_type_t streamType,
198         uint32_t sampleRate,
199         audio_format_t format,
200         audio_channel_mask_t channelMask,
201         size_t frameCount,
202         audio_output_flags_t flags,
203         callback_t cbf,
204         void* user,
205         int32_t notificationFrames,
206         audio_session_t sessionId,
207         transfer_type transferType,
208         const audio_offload_info_t *offloadInfo,
209         uid_t uid,
210         pid_t pid,
211         const audio_attributes_t* pAttributes,
212         bool doNotReconnect,
213         float maxRequiredSpeed)
214     : mStatus(NO_INIT),
215       mState(STATE_STOPPED),
216       mPreviousPriority(ANDROID_PRIORITY_NORMAL),
217       mPreviousSchedulingGroup(SP_DEFAULT),
218       mPausedPosition(0),
219       mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE),
220       mPortId(AUDIO_PORT_HANDLE_NONE)
221 {
222     mStatus = set(streamType, sampleRate, format, channelMask,
223             frameCount, flags, cbf, user, notificationFrames,
224             0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
225             offloadInfo, uid, pid, pAttributes, doNotReconnect, maxRequiredSpeed);
226 }

Continuing into AudioTrack::set():

289 status_t AudioTrack::set(
290         audio_stream_type_t streamType,
291         uint32_t sampleRate,
292         audio_format_t format,
293         audio_channel_mask_t channelMask,
294         size_t frameCount,
295         audio_output_flags_t flags,
296         callback_t cbf,
297         void* user,
298         int32_t notificationFrames,
299         const sp<IMemory>& sharedBuffer,
300         bool threadCanCallJava,
301         audio_session_t sessionId,
302         transfer_type transferType,
303         const audio_offload_info_t *offloadInfo,
304         uid_t uid,
305         pid_t pid,
306         const audio_attributes_t* pAttributes,
307         bool doNotReconnect,
308         float maxRequiredSpeed)
309 {
310     ...
315     mThreadCanCallJava = threadCanCallJava;
316 
317     switch (transferType) {
318     case TRANSFER_DEFAULT:
319         if (sharedBuffer != 0) {
320             transferType = TRANSFER_SHARED;
321         } else if (cbf == NULL || threadCanCallJava) {
322             transferType = TRANSFER_SYNC;
323         } else {
324         transferType = TRANSFER_CALLBACK;  // in this constructor call cbf != NULL && sharedBuffer == 0, so the callback mode is used
325         }
326         break;
327     case TRANSFER_CALLBACK: // validate the arguments
328         if (cbf == NULL || sharedBuffer != 0) {
329             ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");
330             return BAD_VALUE;
331         }
332         break;
333     case TRANSFER_OBTAIN:
334     case TRANSFER_SYNC:
335         if (sharedBuffer != 0) {
336             ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");
337             return BAD_VALUE;
338         }
339         break;
340     case TRANSFER_SHARED:
341         if (sharedBuffer == 0) {
342             ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");
343             return BAD_VALUE;
344         }
345         break;
346     default:
347         ALOGE("Invalid transfer type %d", transferType);
348         return BAD_VALUE;
349     }
350     mSharedBuffer = sharedBuffer;
351     mTransfer = transferType;
352     mDoNotReconnect = doNotReconnect;
353     
354     ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %zu", sharedBuffer->pointer(),
355             sharedBuffer->size());
356     
357     ALOGV("set() streamType %d frameCount %zu flags %04x", streamType, frameCount, flags);
358     
359     // invariant that mAudioTrack != 0 is true only after set() returns successfully
360     if (mAudioTrack != 0) {
361         ALOGE("Track already in use");
362         return INVALID_OPERATION;
363     }
364 
365     // handle default values first.
366     if (streamType == AUDIO_STREAM_DEFAULT) {
367         streamType = AUDIO_STREAM_MUSIC;
368     }
369     if (pAttributes == NULL) {
370         if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
371             ALOGE("Invalid stream type %d", streamType);
372             return BAD_VALUE;
373         }
374         mStreamType = streamType;
375 
376     } else {
377         // stream type shouldn't be looked at, this track has audio attributes
378         memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
379         ALOGV("Building AudioTrack with attributes: usage=%d content=%d flags=0x%x tags=[%s]",
380                 mAttributes.usage, mAttributes.content_type, mAttributes.flags, mAttributes.tags);
381         mStreamType = AUDIO_STREAM_DEFAULT;
382         if ((mAttributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
383             flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
384         }
385         if ((mAttributes.flags & AUDIO_FLAG_LOW_LATENCY) != 0) {
386             flags = (audio_output_flags_t) (flags | AUDIO_OUTPUT_FLAG_FAST);
387         }
388         // check deep buffer after flags have been modified above
389         if (flags == AUDIO_OUTPUT_FLAG_NONE && (mAttributes.flags & AUDIO_FLAG_DEEP_BUFFER) != 0) {
390             flags = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
391         }
392     }
393 
394     // these below should probably come from the audioFlinger too...
395     if (format == AUDIO_FORMAT_DEFAULT) {
396         format = AUDIO_FORMAT_PCM_16_BIT;
397     } else if (format == AUDIO_FORMAT_IEC61937) { // HDMI pass-through?
398         mAttributes.flags |= AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO;
399     }
400 
401     // validate parameters
402     if (!audio_is_valid_format(format)) {
403         ALOGE("Invalid format %#x", format);
404         return BAD_VALUE;
405     }
406     mFormat = format;
407 
408     if (!audio_is_output_channel(channelMask)) {
409         ALOGE("Invalid channel mask %#x", channelMask);
410         return BAD_VALUE;
411     }
412     mChannelMask = channelMask;
413     uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);
414     mChannelCount = channelCount;
415 
416     // force direct flag if format is not linear PCM
417     // or offload was requested
418     if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
419             || !audio_is_linear_pcm(format)) {
420         ALOGV( (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
421                     ? "Offload request, forcing to Direct Output"
422                     : "Not linear PCM, forcing to Direct Output");
423         flags = (audio_output_flags_t)
424                 // FIXME why can't we allow direct AND fast?
425                 ((flags | AUDIO_OUTPUT_FLAG_DIRECT) & ~AUDIO_OUTPUT_FLAG_FAST);
426     }
427 
428     // force direct flag if HW A/V sync requested
429     if ((flags & AUDIO_OUTPUT_FLAG_HW_AV_SYNC) != 0) {
430         flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_DIRECT);
431     }
432 
433     if (flags & AUDIO_OUTPUT_FLAG_DIRECT) {
434         if (audio_has_proportional_frames(format)) {
435             mFrameSize = channelCount * audio_bytes_per_sample(format);
436         } else {
437             mFrameSize = sizeof(uint8_t);
438         }
439     } else {
440         ALOG_ASSERT(audio_has_proportional_frames(format));
441         mFrameSize = channelCount * audio_bytes_per_sample(format);
442         // createTrack will return an error if PCM format is not supported by server,
443         // so no need to check for specific PCM formats here
444     }
445 
446     // sampling rate must be specified for direct outputs
447     if (sampleRate == 0 && (flags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
448         return BAD_VALUE;
449     }
450     mSampleRate = sampleRate;
451     mOriginalSampleRate = sampleRate;
452     mPlaybackRate = AUDIO_PLAYBACK_RATE_DEFAULT;
453     // 1.0 <= mMaxRequiredSpeed <= AUDIO_TIMESTRETCH_SPEED_MAX
454     mMaxRequiredSpeed = min(max(maxRequiredSpeed, 1.0f), AUDIO_TIMESTRETCH_SPEED_MAX);
455 
456     // Make copy of input parameter offloadInfo so that in the future:
457     //  (a) createTrack_l doesn't need it as an input parameter
458     //  (b) we can support re-creation of offloaded tracks
459     if (offloadInfo != NULL) {
460         mOffloadInfoCopy = *offloadInfo;
461         mOffloadInfo = &mOffloadInfoCopy;
462     } else {
463         mOffloadInfo = NULL;
464         memset(&mOffloadInfoCopy, 0, sizeof(audio_offload_info_t));
465     }
466 
467     mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
468     mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
469     mSendLevel = 0.0f;
470     // mFrameCount is initialized in createTrack_l
471     mReqFrameCount = frameCount;
472     if (notificationFrames >= 0) {
473         mNotificationFramesReq = notificationFrames;
474         mNotificationsPerBufferReq = 0;
475     } else {
476         if (!(flags & AUDIO_OUTPUT_FLAG_FAST)) {
477             ALOGE("notificationFrames=%d not permitted for non-fast track",
478                     notificationFrames);
479             return BAD_VALUE;
480         }
481         if (frameCount > 0) {
482             ALOGE("notificationFrames=%d not permitted with non-zero frameCount=%zu",
483                     notificationFrames, frameCount);
484             return BAD_VALUE;
485         }
486         mNotificationFramesReq = 0;
487         const uint32_t minNotificationsPerBuffer = 1;
488         const uint32_t maxNotificationsPerBuffer = 8;
489         mNotificationsPerBufferReq = min(maxNotificationsPerBuffer,
490                 max((uint32_t) -notificationFrames, minNotificationsPerBuffer));
491         ALOGW_IF(mNotificationsPerBufferReq != (uint32_t) -notificationFrames,
492                 "notificationFrames=%d clamped to the range -%u to -%u",
493                 notificationFrames, minNotificationsPerBuffer, maxNotificationsPerBuffer);
494     }
495     mNotificationFramesAct = 0;
496     if (sessionId == AUDIO_SESSION_ALLOCATE) {
497         mSessionId = (audio_session_t) AudioSystem::newAudioUniqueId(AUDIO_UNIQUE_ID_USE_SESSION);
498     } else {
499         mSessionId = sessionId;
500     }
501     int callingpid = IPCThreadState::self()->getCallingPid();
502     int mypid = getpid();
503     if (uid == AUDIO_UID_INVALID || (callingpid != mypid)) {
504         mClientUid = IPCThreadState::self()->getCallingUid();
505     } else {
506         mClientUid = uid;
507     }
508     if (pid == -1 || (callingpid != mypid)) {
509         mClientPid = callingpid;
510     } else {
511         mClientPid = pid;
512     }
513     mAuxEffectId = 0;
514     mOrigFlags = mFlags = flags;
515     mCbf = cbf;
516     // create the AudioTrackThread
517     if (cbf != NULL) {
518         mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
519         mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
520         // thread begins in paused state, and will not reference us until start()
521     }
522 
523     // create the IAudioTrack
524     status_t status = createTrack_l();
525 
526     if (status != NO_ERROR) {
527         if (mAudioTrackThread != 0) {
528             mAudioTrackThread->requestExit();   // see comment in AudioTrack.h
529             mAudioTrackThread->requestExitAndWait();
530             mAudioTrackThread.clear();
531         }
532         return status;
533     }
534 
535     mStatus = NO_ERROR;
536     mUserData = user;
537     mLoopCount = 0;
538     mLoopStart = 0;
539     mLoopEnd = 0;
540     mLoopCountNotified = 0;
541     mMarkerPosition = 0;
542     mMarkerReached = false;
543     mNewPosition = 0;
544     mUpdatePeriod = 0;
545     mPosition = 0;
546     mReleased = 0;
547     mStartUs = 0;
548     AudioSystem::acquireAudioSessionId(mSessionId, mClientPid);
549     mSequence = 1;
550     mObservedSequence = mSequence;
551     mInUnderrun = false;
552     mPreviousTimestampValid = false;
553     mTimestampStartupGlitchReported = false;
554     mRetrogradeMotionReported = false;
555     mPreviousLocation = ExtendedTimestamp::LOCATION_INVALID;
556     mStartTs.mPosition = 0;
557     mUnderrunCountOffset = 0;
558     mFramesWritten = 0;
559     mFramesWrittenServerOffset = 0;
560     mFramesWrittenAtRestore = -1; // -1 is a unique initializer.
561     mVolumeHandler = new VolumeHandler();
562     return NO_ERROR;
563 }

Here the callback that supplies audio data is registered and an AudioTrackThread is started to drive it; createTrack_l() is then called to create the IAudioTrack.

AudioTrackThread

2991 bool AudioTrack::AudioTrackThread::threadLoop()
2992 {
2993     {
2994         AutoMutex _l(mMyLock);
2995         if (mPaused) {
2996             mMyCond.wait(mMyLock);
2997             // caller will check for exitPending()
2998             return true;
2999         }
3000         if (mIgnoreNextPausedInt) {
3001             mIgnoreNextPausedInt = false;
3002             mPausedInt = false;
3003         }
3004         if (mPausedInt) {
3005             if (mPausedNs > 0) {
3006                 (void) mMyCond.waitRelative(mMyLock, mPausedNs);
3007             } else {
3008                 mMyCond.wait(mMyLock);
3009             }
3010             mPausedInt = false;
3011             return true;
3012         }
3013     }
3014     if (exitPending()) {
3015         return false;
3016     }
3017     nsecs_t ns = mReceiver.processAudioBuffer();
3018     switch (ns) {
3019     case 0:
3020         return true;
3021     case NS_INACTIVE:
3022         pauseInternal();
3023         return true;
3024     case NS_NEVER:
3025         return false;
3026     case NS_WHENEVER:
3027         // Event driven: call wake() when callback notifications conditions change.
3028         ns = INT64_MAX;
3029         // fall through
3030     default:
3031         LOG_ALWAYS_FATAL_IF(ns < 0, "processAudioBuffer() returned %" PRId64, ns);
3032         pauseInternal(ns);
3033         return true;
3034     }
3035 }

Now let's see what mReceiver.processAudioBuffer() does; mReceiver is the AudioTrack itself.

nsecs_t AudioTrack::processAudioBuffer()
{
    ......
    // while there is still audio data left to write
    while (mRemainingFrames > 0) {

        Buffer audioBuffer;
        audioBuffer.frameCount = mRemainingFrames;
        size_t nonContig;
        // request a usable buffer from the shared memory block
        status_t err = obtainBuffer(&audioBuffer, requested, NULL, &nonContig);
    ....
}

Based on the positions of the read and write pointers in the shared memory, processAudioBuffer() determines what state the track is in. The event values delivered through the mCbf callback are defined in the event_type enum in /frameworks/av/include/media/AudioTrack.h:

    /* Events used by AudioTrack callback function (callback_t).
     * Keep in sync with frameworks/base/media/java/android/media/AudioTrack.java NATIVE_EVENT_*.
     */
    enum event_type {
        EVENT_MORE_DATA = 0,        // more data needs to be written to the buffer

        EVENT_UNDERRUN = 1,         // underrun: data is not being supplied fast enough

        EVENT_LOOP_END = 2,         // a loop iteration has ended

        EVENT_MARKER = 3,           // playback head reached the marker position

        EVENT_NEW_POS = 4,          // Playback head is at a new position
                                    // (See setPositionUpdatePeriod()).
        EVENT_BUFFER_END = 5,       // Playback has completed for a static track.
        EVENT_NEW_IAUDIOTRACK = 6,  // IAudioTrack was re-created, either due to re-routing and
                                    // voluntary invalidation by mediaserver, or mediaserver crash.
        EVENT_STREAM_END = 7,       // sent after all the buffers queued in AF and HW are played
                                    // back (after stop is called) for an offloaded track.
#if 0   // FIXME not yet implemented
        EVENT_NEW_TIMESTAMP = 8,    // Delivered periodically and when there's a significant change
                                    // in the mapping from frame position to presentation time.
                                    // See AudioTimestamp for the information included with event.
#endif
    };

mCbf is the callback passed in through AudioTrack::set().
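
At the Java API level, EVENT_MARKER and EVENT_NEW_POS surface through AudioTrack.OnPlaybackPositionUpdateListener; a minimal sketch follows, where the marker and period frame counts are assumptions.

    // Marker and periodic-position callbacks, the Java-level counterparts of
    // EVENT_MARKER and EVENT_NEW_POS.
    track.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
        @Override
        public void onMarkerReached(AudioTrack t) {        // native EVENT_MARKER
            Log.d("AudioTrackDemo", "marker reached");
        }
        @Override
        public void onPeriodicNotification(AudioTrack t) { // native EVENT_NEW_POS
            Log.d("AudioTrackDemo", "playback head advanced");
        }
    });
    track.setNotificationMarkerPosition(44100);   // fire onMarkerReached after 1 s of frames (assumed)
    track.setPositionUpdatePeriod(4410);          // fire onPeriodicNotification every 100 ms of frames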

Creating the IAudioTrack

AudioTrack::set() calls createTrack_l() to create the IAudioTrack.
To create the IAudioTrack, we first need a proxy object for AudioFlinger:

// must be called with mLock held
status_t AudioTrack::createTrack_l()
{
    // obtain the AudioFlinger proxy via AudioSystem; all requests below go through it
    const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
    if (audioFlinger == 0) {
        ALOGE("Could not get audioflinger");
        return NO_INIT;
    }

    if (mDeviceCallback != 0 && mOutput != AUDIO_IO_HANDLE_NONE) {
        AudioSystem::removeAudioDeviceCallback(mDeviceCallback, mOutput);
    }

    // output is an important handle: it effectively indexes a playback thread on the server side,
    // and the client uses it to find the corresponding audio output thread
    audio_io_handle_t output;
    audio_stream_type_t streamType = mStreamType;
    audio_attributes_t *attr = (mStreamType == AUDIO_STREAM_DEFAULT) ? &mAttributes : NULL;

    // 1. determine the output handle from the parameters (this actually calls into the server)
    status_t status;
    status = AudioSystem::getOutputForAttr(attr, &output,
                                           (audio_session_t)mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);

    // ... query the server-side output thread for the audio parameters it uses for this kind of stream ...

    // Client decides whether the track is TIMED (see below), but can only express a preference
    // for FAST.  Server will perform additional tests.
    if ((mFlags & AUDIO_OUTPUT_FLAG_FAST) && !((
            // either of these use cases:
            // use case 1: shared buffer
            (mSharedBuffer != 0) ||
            // use case 2: callback transfer mode
            (mTransfer == TRANSFER_CALLBACK) ||
            // use case 3: obtain/release mode
            (mTransfer == TRANSFER_OBTAIN)) &&
            // matching sample rate
            (mSampleRate == mAfSampleRate))) {
        ALOGW("AUDIO_OUTPUT_FLAG_FAST denied by client; transfer %d, track %u Hz, output %u Hz",
                mTransfer, mSampleRate, mAfSampleRate);
        // once denied, do not request again if IAudioTrack is re-created
        mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_FAST);
    }

    // The client's AudioTrack buffer is divided into n parts for purpose of wakeup by server, where
    //  n = 1   fast track with single buffering; nBuffering is ignored
    //  n = 2   fast track with double buffering
    //  n = 2   normal track, (including those with sample rate conversion)
    //  n >= 3  very high latency or very small notification interval (unused).
    const uint32_t nBuffering = 2;

    mNotificationFramesAct = mNotificationFramesReq;

    // frameCount computation ...

    // trackFlags setup ... (omitted)

    size_t temp = frameCount;   // temp may be replaced by a revised value of frameCount,
                                // but we will still need the original value also
    int originalSessionId = mSessionId;
    // 2. call createTrack() on the server side, which returns the IAudioTrack interface;
    // in stream mode sharedBuffer is null and output is the id of the playback thread inside AudioFlinger
    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &trackFlags,
                                                      mSharedBuffer,
                                                      output,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);
    ALOGE_IF(originalSessionId != AUDIO_SESSION_ALLOCATE && mSessionId != originalSessionId,
            "session ID changed from %d to %d", originalSessionId, mSessionId);

    if (status != NO_ERROR) {
        ALOGE("AudioFlinger could not create track, status: %d", status);
        goto release;
    }
    ALOG_ASSERT(track != 0);

    // AudioFlinger now owns the reference to the I/O handle,
    // so we are no longer responsible for releasing it.
    // when AudioFlinger creates the Track it allocates a block of shared memory; here we get the proxy (BpMemory) for it
    // fetch the stream control block; IMemory is a cross-process interface for operating on shared memory
    sp<IMemory> iMem = track->getCblk();
    if (iMem == 0) {
        ALOGE("Could not get control block");
        return NO_INIT;
    }
    void *iMemPointer = iMem->pointer();
    if (iMemPointer == NULL) {
        ALOGE("Could not get control block pointer");
        return NO_INIT;
    }
    // invariant that mAudioTrack != 0 is true only after set() returns successfully
    if (mAudioTrack != 0) {
        IInterface::asBinder(mAudioTrack)->unlinkToDeath(mDeathNotifier, this);
        mDeathNotifier.clear();
    }
    // save the newly created Track proxy and the anonymous-shared-memory proxy into AudioTrack member variables
    mAudioTrack = track;
    mCblkMemory = iMem;
    IPCThreadState::self()->flushCommands();

    // cast the start address to audio_track_cblk_t (analyzed in a later article); for now it is enough to know
    // that this struct manages the shared memory. An audio_track_cblk_t object sits at the head of the anonymous shared memory region.
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMemPointer);
    mCblk = cblk;
    // note that temp is the (possibly revised) value of frameCount
    if (temp < frameCount || (frameCount == 0 && temp == 0)) {
        // In current design, AudioTrack client checks and ensures frame count validity before
        // passing it to AudioFlinger so AudioFlinger should not return a different value except
        // for fast track as it uses a special method of assigning frame count.
        ALOGW("Requested frameCount %zu but received frameCount %zu", frameCount, temp);
    }
    frameCount = temp;

    mAwaitBoost = false;
    if (mFlags & AUDIO_OUTPUT_FLAG_FAST) {
        if (trackFlags & IAudioFlinger::TRACK_FAST) {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST successful; frameCount %zu", frameCount);
            mAwaitBoost = true;
        } else {
            ALOGV("AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount %zu", frameCount);
            // once denied, do not request again if IAudioTrack is re-created
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_FAST);
        }
    }
    if (mFlags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
        if (trackFlags & IAudioFlinger::TRACK_OFFLOAD) {
            ALOGV("AUDIO_OUTPUT_FLAG_OFFLOAD successful");
        } else {
            ALOGW("AUDIO_OUTPUT_FLAG_OFFLOAD denied by server");
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD);
            // FIXME This is a warning, not an error, so don't return error status
            //return NO_INIT;
        }
    }
    if (mFlags & AUDIO_OUTPUT_FLAG_DIRECT) {
        if (trackFlags & IAudioFlinger::TRACK_DIRECT) {
            ALOGV("AUDIO_OUTPUT_FLAG_DIRECT successful");
        } else {
            ALOGW("AUDIO_OUTPUT_FLAG_DIRECT denied by server");
            mFlags = (audio_output_flags_t) (mFlags & ~AUDIO_OUTPUT_FLAG_DIRECT);
            // FIXME This is a warning, not an error, so don't return error status
            //return NO_INIT;
        }
    }
    // Make sure that application is notified with sufficient margin before underrun
    if (mSharedBuffer == 0 && audio_is_linear_pcm(mFormat)) {
        // Theoretically double-buffering is not required for fast tracks,
        // due to tighter scheduling.  But in practice, to accommodate kernels with
        // scheduling jitter, and apps with computation jitter, we use double-buffering
        // for fast tracks just like normal streaming tracks.
        if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount / nBuffering) {
            mNotificationFramesAct = frameCount / nBuffering;
        }
    }

    // We retain a copy of the I/O handle, but don't own the reference
    mOutput = output;
    mRefreshRemaining = true;

    // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
    // is the value of pointer() for the shared buffer, otherwise buffers points
    // immediately after the control block.  This address is for the mapping within client
    // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
    void* buffers;
    if (mSharedBuffer == 0) {
        buffers = cblk + 1;
    } else {
        buffers = mSharedBuffer->pointer();
        if (buffers == NULL) {
            ALOGE("Could not get buffer pointer");
            return NO_INIT;
        }
    }

    mAudioTrack->attachAuxEffect(mAuxEffectId);
    // FIXME doesn't take into account speed or future sample rate changes (until restoreTrack)
    // FIXME don't believe this lie
    mLatency = mAfLatency + (1000*frameCount) / mSampleRate;

    mFrameCount = frameCount;
    // If IAudioTrack is re-created, don't let the requested frameCount
    // decrease.  This can confuse clients that cache frameCount().
    if (frameCount > mReqFrameCount) {
        mReqFrameCount = frameCount;
    }

    // reset server position to 0 as we have new cblk.
    mServer = 0;

    // remember: mSharedBuffer == 0 means stream mode
    if (mSharedBuffer == 0) {
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
    } else {
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
        mProxy = mStaticProxy;
    }

    // the initial values of the audio parameters are then set through mProxy

    return status;
}

As we know, when AudioPolicyService starts it loads all the audio interfaces the system supports and opens the default audio output. When an audio output is opened, AudioFlinger::openOutput() creates a PlaybackThread for it and assigns the thread a globally unique audio_io_handle_t, which is stored as a key/value pair in AudioFlinger's member mPlaybackThreads. Here, the audio parameters are first used to obtain the id of the PlaybackThread serving the current output (AudioSystem::getOutput(), or getOutputForAttr() in the code above), and that id is then passed to createTrack() to create the Track.

Inside AudioFlinger an AudioTrack is managed as a Track. Because the two objects live in different processes, a bridge is needed between them, and that bridge is IAudioTrack. Besides requesting a Track for the AudioTrack inside AudioFlinger, createTrack_l() also establishes this IAudioTrack link between the two.

IAudioTrack ties AudioTrack and AudioFlinger together. In static mode, the anonymous shared memory that holds the audio data is created on the AudioTrack side; in stream mode it is created on the AudioFlinger side. The two differ: in stream mode the head of the shared memory region holds an audio_track_cblk_t object that coordinates the producer (AudioTrack) and the consumer (AudioFlinger). createTrack() itself simply creates a Track object inside AudioFlinger.
