Android Audio Subsystem (3): AudioTrack Flow Analysis

Hello! This is 风筝's blog. Welcome, and feel free to exchange ideas with me.

This post uses Android N as the example:

Before getting into AudioTrack, here is a diagram (found online) that briefly describes how AudioTrack, PlaybackThread, and the output stream devices map to one another:
[Figure: AudioTrack / PlaybackThread / output stream device relationships inside AudioFlinger]
Generally speaking, the output stream device determines the type of PlaybackThread that serves it, and PlaybackThread instances map one-to-one to output stream devices (an OffloadThread only outputs audio data to a compress_offload device; a MixerThread (with FastMixer) only outputs audio data to a low_latency device).

From that relationship diagram we can see that AudioTrack feeds its audio stream data into the corresponding PlaybackThread. So how does the application process control those streams, e.g. start() playback, stop() playback, pause() playback? Note that the application process and AudioFlinger do not live in the same process. This is why AudioFlinger provides stream management and a communication interface that lets the application process control the state of AudioFlinger's audio streams across process boundaries.
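That communication channel is the IAudioTrack Binder interface, abridged below from frameworks/av/include/media/IAudioTrack.h (Android N); TrackHandle, introduced shortly, is its server-side implementation:

//@IAudioTrack.h (abridged)
class IAudioTrack : public IInterface
{
public:
    DECLARE_META_INTERFACE(AudioTrack);
    virtual sp<IMemory> getCblk() const = 0; // shared control block + data buffer
    virtual status_t    start() = 0;         // begin playback
    virtual void        stop() = 0;          // stop playback
    virtual void        flush() = 0;         // discard pending data
    virtual void        pause() = 0;         // pause playback
    // ... attachAuxEffect(), setParameters(), getTimestamp(), etc.
};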

AudioFlinger's stream management is implemented by AudioFlinger::PlaybackThread::Track. Track and AudioTrack are in one-to-one correspondence: every time an application creates an AudioTrack, a matching Track is created in some PlaybackThread on the AudioFlinger side.

PlaybackThread and AudioTrack/Track are in a one-to-many relationship: one PlaybackThread can carry multiple Tracks. A Track and its AudioTrack exchange audio data through shared memory, in one of two modes (a native-level sketch follows this list):
1. MODE_STATIC: all the data is handed over to the other side in one shot; simple and efficient. Suitable for ringtones, system notification sounds, and other playback with small memory demands.
2. MODE_STREAM: like network-based audio streaming, the audio data is delivered to the receiver in multiple successive chunks until it is exhausted. Typically used when the audio file is large, or when its properties are demanding, e.g. high sample rate or large bit depth.
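At the native level the two modes boil down to whether a sharedBuffer is passed to AudioTrack. A minimal MODE_STATIC-style sketch, assuming a MemoryDealer-backed IMemory (the helper name makeStaticTrack is mine; error handling omitted):

//@ illustrative sketch, not AOSP source
#include <binder/MemoryDealer.h>
#include <media/AudioTrack.h>
#include <cstring>
using namespace android;

// MODE_STATIC: the app allocates one block of shared memory, copies the whole
// clip into it once, and playback happens straight out of that block
// (TRANSFER_SHARED on the native side).
sp<AudioTrack> makeStaticTrack(const void* pcm, size_t bytes) {
    sp<MemoryDealer> dealer = new MemoryDealer(bytes, "static-pcm");
    sp<IMemory> shared = dealer->allocate(bytes);
    memcpy(shared->pointer(), pcm, bytes);  // one-shot copy of the entire clip
    return new AudioTrack(AUDIO_STREAM_MUSIC, 44100, AUDIO_FORMAT_PCM_16_BIT,
                          AUDIO_CHANNEL_OUT_STEREO, shared);
}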

  • AudioFlinger::PlaybackThread: the playback thread base class; audio streams with different output flags map to different PlaybackThread subclasses
  • AudioFlinger::PlaybackThread::Track: the audio stream management class; it creates a block of anonymous shared memory for data exchange between AudioTrack and AudioFlinger
  • AudioFlinger::TrackHandle: a Track object only handles stream management and exposes no cross-process Binder interface of its own, yet the application process needs to control the stream; TrackHandle is the object that proxies cross-process communication to the Track, and AudioTrack interacts with the Track through it
  • AudioTrack: the API class the Android audio system exposes to applications, responsible for audio stream output; each audio stream corresponds to one AudioTrack instance, and AudioTracks with different output flags are matched to different AudioFlinger::PlaybackThreads
  • AudioTrack::AudioTrackThread: created when the transfer mode is TRANSFER_CALLBACK; it actively pulls data from the user process via the audioCallback callback and fills it into the buffer. In TRANSFER_SYNC mode the thread is not needed, because the user process keeps calling AudioTrack.write() to fill the buffer; in TRANSFER_SHARED mode it is not needed either, because the user process creates a block of anonymous shared memory and copies the whole clip into it in one shot (the selection logic is paraphrased right below)
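How the transfer mode gets chosen is worth spelling out: when the caller leaves transferType at TRANSFER_DEFAULT, AudioTrack::set() derives the real mode from the other arguments, roughly as follows (paraphrased from Android N):

//@AudioTrack.cpp (paraphrased)
switch (transferType) {
case TRANSFER_DEFAULT:
    if (sharedBuffer != 0) {
        transferType = TRANSFER_SHARED;    // MODE_STATIC: app-provided IMemory
    } else if (cbf == NULL || threadCanCallJava) {
        transferType = TRANSFER_SYNC;      // MODE_STREAM: app keeps calling write()
    } else {
        transferType = TRANSFER_CALLBACK;  // AudioTrackThread pulls data via cbf
    }
    break;
//......
}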

The source tree ships an AudioTrack usage example that tests the maximum volume of the left and right stereo channels:

//frameworks/base/media/tests/MediaFrameworkTest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java
    public void testSetStereoVolumeMax() throws Exception {
        // constants for test
        final String TEST_NAME = "testSetStereoVolumeMax";
        final int TEST_SR = 22050;
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STREAM;
        final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;

        //-------- initialization --------------
        int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
                minBuffSize, TEST_MODE);
        byte data[] = new byte[minBuffSize/2];
        //--------    test        --------------
        track.write(data, 0, data.length);
        track.write(data, 0, data.length);
        track.play();
        float maxVol = AudioTrack.getMaxVolume(); // get the maximum volume value
        assertTrue(TEST_NAME, track.setStereoVolume(maxVol, maxVol) == AudioTrack.SUCCESS);
        //-------- tear down      --------------
        track.release();
    }

This demo covers the standard AudioTrack operations (reproduced at the native level right below):
Step 1: getMinBufferSize, compute the minimum buffer size
Step 2: create an AudioTrack object
Step 3: write, feed audio data in
Step 4: play, start playback
Step 5: release, finish playback and free the resources
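The same five steps can also be driven from native code. A minimal sketch, assuming a C++ client linked against libmedia (Android N); the helper name playSilence and the zero-filled PCM are mine, and error handling is omitted:

//@ illustrative sketch, not AOSP source
#include <media/AudioTrack.h>
#include <vector>
using namespace android;

void playSilence() {
    size_t frameCount = 0;
    // Step 1: query the minimum frame count for this output configuration
    AudioTrack::getMinFrameCount(&frameCount, AUDIO_STREAM_MUSIC, 44100);

    // Step 2: construct the AudioTrack (no sharedBuffer, so streaming mode)
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC, 44100, AUDIO_FORMAT_PCM_16_BIT,
            AUDIO_CHANNEL_OUT_STEREO, frameCount);

    // Steps 3-4: start rendering and write PCM data into the shared buffer
    std::vector<int16_t> pcm(frameCount * 2 /* stereo */, 0);
    track->start();
    track->write(pcm.data(), pcm.size() * sizeof(int16_t));

    // Step 5: stop; dropping the last sp<> reference releases the track
    track->stop();
}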

//@AudioTrack.cpp
AudioTrack::AudioTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int32_t notificationFrames,
        audio_session_t sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect,
        float maxRequiredSpeed)
    : mStatus(NO_INIT),
      mState(STATE_STOPPED),
      mPreviousPriority(ANDROID_PRIORITY_NORMAL),
      mPreviousSchedulingGroup(SP_DEFAULT),
      mPausedPosition(0),
      mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE)
{
    mStatus = set(streamType, sampleRate, format, channelMask,
            frameCount, flags, cbf, user, notificationFrames,
            0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
            offloadInfo, uid, pid, pAttributes, doNotReconnect, maxRequiredSpeed);
}
status_t AudioTrack::set(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t frameCount,
        audio_output_flags_t flags,
        callback_t cbf,
        void* user,
        int32_t notificationFrames,
        const sp<IMemory>& sharedBuffer,
        bool threadCanCallJava,
        audio_session_t sessionId,
        transfer_type transferType,
        const audio_offload_info_t *offloadInfo,
        int uid,
        pid_t pid,
        const audio_attributes_t* pAttributes,
        bool doNotReconnect,
        float maxRequiredSpeed)
{
	//......
	// handle default values first.
    if (streamType == AUDIO_STREAM_DEFAULT) {
        streamType = AUDIO_STREAM_MUSIC;//AUDIO_STREAM_DEFAULT falls back to AUDIO_STREAM_MUSIC
    }
    if (pAttributes == NULL) {
    	//must not exceed AUDIO_STREAM_PUBLIC_CNT; there are 13 public stream types to choose from
        if (uint32_t(streamType) >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("Invalid stream type %d", streamType);
            return BAD_VALUE;
        }
        //assign mStreamType; createTrack_l will use it
        mStreamType = streamType;
    } else {
        // stream type shouldn't be looked at, this track has audio attributes
        memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
        //assign mStreamType; createTrack_l will use it
        mStreamType = AUDIO_STREAM_DEFAULT;
        if ((mAttributes.flags & AUDIO_FLAG_HW_AV_SYNC) != 0) {
            flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_HW_AV_SYNC);
        }
        if ((mAttributes.flags & AUDIO_FLAG_LOW_LATENCY) != 0) {
            flags = (audio_output_flags_t) (flags | AUDIO_OUTPUT_FLAG_FAST);
        }
    }
	//......	
	//assign mFlags; createTrack_l will use it
    mOrigFlags = mFlags = flags;
    mCbf = cbf;
    if (cbf != NULL) {
        mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
        mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
        // thread begins in paused state, and will not reference us until start()
    }
    // create the IAudioTrack
    status_t status = createTrack_l();
    //......
    return status;
}
  • 1. First check streamType and assign the AudioTrack member mStreamType.
  • 2. Assign the AudioTrack member mFlags; flags comes in from the AudioTrack constructor.
  • 3. If cbf (the audioCallback callback) is non-null, create an AudioTrackThread to service the audioCallback (in MODE_STREAM mode cbf is null);
  • 4. run() is called, but the thread does not start working right away: it begins paused and only proceeds at start() ("thread begins in paused state, and will not reference us until start()"), as the abridged loop below shows;
  • 5. Call createTrack_l to create the IAudioTrack;
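For reference, the thread body is little more than a wrapper around processAudioBuffer(), which fires the audioCallback to pull data from the app. Abridged from AudioTrack.cpp (Android N):

//@AudioTrack.cpp (abridged)
bool AudioTrack::AudioTrackThread::threadLoop()
{
    {
        AutoMutex _l(mMyLock);
        if (mPaused) {
            // created in the paused state; start() signals this condition
            mMyCond.wait(mMyLock);
            return true;
        }
    }
    // mReceiver is the owning AudioTrack; processAudioBuffer() invokes the
    // audioCallback (EVENT_MORE_DATA and friends) and returns how long to sleep
    nsecs_t ns = mReceiver.processAudioBuffer();
    //...... sleep for ns, or exit the loop on NS_INACTIVE / NS_NEVER ......
    return true;
}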
status_t AudioTrack::createTrack_l()
{
	const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();

	status = AudioSystem::getOutputForAttr(attr, &output,
                                           mSessionId, &streamType, mClientUid,
                                           mSampleRate, mFormat, mChannelMask,
                                           mFlags, mSelectedDeviceId, mOffloadInfo);

    sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                      mSampleRate,
                                                      mFormat,
                                                      mChannelMask,
                                                      &temp,
                                                      &flags,
                                                      mSharedBuffer,
                                                      output,
                                                      mClientPid,
                                                      tid,
                                                      &mSessionId,
                                                      mClientUid,
                                                      &status);
	// update proxy
	//used to manage the shared memory
    if (mSharedBuffer == 0) {//MODE_STREAM
        mStaticProxy.clear();
        mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
    } else {//MODE_STATIC
        mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSize);
        mProxy = mStaticProxy;
    }

}
  • 1. Obtain the audioFlinger service.
  • 2. getOutputForAttr builds audio attributes from the stream type the AudioTrack was given, maps the attributes to their group/category (strategy), finds the device for that strategy, and finally finds the output corresponding to the device. (One device may be served by several outputs, one per sound card, but only one is selected.)
  • 3. Create the AudioFlinger::PlaybackThread::Track. (During AudioTrack creation an output is selected; an output corresponds to one playback device and also to one PlaybackThread, and the application's AudioTrack maps one-to-one to a Track inside that PlaybackThread.)
  • 4. Create an AudioTrackClientProxy/StaticAudioTrackClientProxy to manage the shared memory, as the write-path sketch below shows. (The APP's AudioTrack <==> the thread's Track exchange data through shared memory.)
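On the client side, these proxies are what AudioTrack::write() drives. A simplified sketch of the write path (condensed from the Android N sources; the kForever/kNonBlocking timeout constants and the Buffer fields are real):

//@AudioTrack.cpp (simplified)
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
    size_t written = 0;
    Buffer audioBuffer;
    while (userSize >= mFrameSize) {
        audioBuffer.frameCount = userSize / mFrameSize;
        // obtainBuffer() blocks (on a futex in audio_track_cblk_t) until the
        // server side has consumed enough frames to free space in the ring buffer
        status_t err = obtainBuffer(&audioBuffer,
                blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
        if (err < 0) break;
        size_t toWrite = audioBuffer.size;
        memcpy(audioBuffer.i8, (const char *) buffer + written, toWrite);
        written += toWrite;
        userSize -= toWrite;
        releaseBuffer(&audioBuffer);  // publish the written frames to the server
    }
    return written;
}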

The function call stack from here on looks like this:

AudioTrack::createTrack_l
	AudioSystem::getOutputForAttr
		AudioPolicyService::getOutputForAttr
			AudioPolicyManager::getOutputForAttr
				AudioPolicyManager::getOutputForDevice
					AudioPolicyService::AudioPolicyClient::openOutput
						af->openOutput
	AudioFlinger::createTrack
		AudioFlinger::PlaybackThread::createTrack_l
			AudioFlinger::PlaybackThread::Track::Track
				AudioFlinger::ThreadBase::TrackBase::TrackBase
				new AudioTrackServerProxy/StaticAudioTrackServerProxy
		new TrackHandle
	new AudioTrackClientProxy/StaticAudioTrackClientProxy

Let's look at AudioSystem::getOutputForAttr:

//@AudioSystem.cpp
status_t AudioSystem::getOutputForAttr(const audio_attributes_t *attr,
                                        audio_io_handle_t *output,
                                        audio_session_t session,
                                        audio_stream_type_t *stream,
                                        uid_t uid,
                                        uint32_t samplingRate,
                                        audio_format_t format,
                                        audio_channel_mask_t channelMask,
                                        audio_output_flags_t flags,
                                        audio_port_handle_t selectedDeviceId,
                                        const audio_offload_info_t *offloadInfo)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return NO_INIT;
    return aps->getOutputForAttr(attr, output, session, stream, uid,
                                 samplingRate, format, channelMask,
                                 flags, selectedDeviceId, offloadInfo);
}
//@AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    if (mAudioPolicyManager == NULL) {
        return NO_INIT;
    }
    ALOGV("getOutput()");
    Mutex::Autolock _l(mLock);

    const uid_t callingUid = IPCThreadState::self()->getCallingUid();
    return mAudioPolicyManager->getOutputForAttr(attr, output, session, stream, uid, samplingRate,
                                    format, channelMask, flags, selectedDeviceId, offloadInfo);
}
//@AudioPolicyManager.cpp
status_t AudioPolicyManager::getOutputForAttr(const audio_attributes_t *attr,
                                              audio_io_handle_t *output,
                                              audio_session_t session,
                                              audio_stream_type_t *stream,
                                              uid_t uid,
                                              uint32_t samplingRate,
                                              audio_format_t format,
                                              audio_channel_mask_t channelMask,
                                              audio_output_flags_t flags,
                                              audio_port_handle_t selectedDeviceId,
                                              const audio_offload_info_t *offloadInfo)
{
    audio_attributes_t attributes;
    if (attr != NULL) {
        if (!isValidAttributes(attr)) {
            ALOGE("getOutputForAttr() invalid attributes: usage=%d content=%d flags=0x%x tags=[%s]",
                  attr->usage, attr->content_type, attr->flags,
                  attr->tags);
            return BAD_VALUE;
        }
        attributes = *attr;
    } else {
        if (*stream < AUDIO_STREAM_MIN || *stream >= AUDIO_STREAM_PUBLIC_CNT) {
            ALOGE("getOutputForAttr():  invalid stream type");
            return BAD_VALUE;
        }
        stream_type_to_audio_attributes(*stream, &attributes);
    }
	//map the attributes to a routing strategy, i.e. determine the category/group
	routing_strategy strategy = (routing_strategy) getStrategyForAttr(&attributes);
	//map the strategy to a device, i.e. pick the playback device for that category/group (headset, Bluetooth, speaker)
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
	//find which outputs carry that device
	*output = getOutputForDevice(device, session, *stream,
                                 samplingRate, format, channelMask,
                                 flags, offloadInfo);
	//......
	return NO_ERROR;
}
//@AudioPolicyManager.cpp
audio_io_handle_t AudioPolicyManager::getOutputForDevice(
        audio_devices_t device,
        audio_session_t session,
        audio_stream_type_t stream,
        uint32_t samplingRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        audio_output_flags_t flags,
        const audio_offload_info_t *offloadInfo)
{
	audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
	//......
	status = mpClientInterface->openOutput(profile->getModuleHandle(),
                                               &output,
                                               &config,
                                               &outputDesc->mDevice,
                                               address,
                                               &outputDesc->mLatency,
                                               outputDesc->mFlags);
	//......
	return output;
}
//@AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    //af is the AudioFlinger service, so this is AudioFlinger::openOutput
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

getOutputForAttr obtains the output path (an output can be thought of as the representative of a HAL-level audio path: primary out, low-latency out, offload, direct_pcm, a2dp output, usb_device output, dp output, and so on). The AudioFlinger::openOutput flow that follows is covered in the earlier article: Android Audio Subsystem (1): the openOutput Flow.

So ultimately getOutputForAttr uses attr, streamType, and the other parameters to choose a device and obtain the matching output (in effect, AudioPolicyClient::openOutput returns the output it opened). For example, a STREAM_MUSIC track typically resolves to the media strategy, which routes to the wired headset when one is plugged in, and the output serving that device is what gets returned.

sp<IAudioTrack> AudioFlinger::createTrack(
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *frameCount,
        audio_output_flags_t *flags,
        const sp<IMemory>& sharedBuffer,
        audio_io_handle_t output,
        pid_t pid,
        pid_t tid,
        audio_session_t *sessionId,
        int clientUid,
        status_t *status)
{
    sp<PlaybackThread::Track> track;
    sp<TrackHandle> trackHandle;
    sp<Client> client;

	PlaybackThread *thread = checkPlaybackThread_l(output);

	track = thread->createTrack_l(client, streamType, sampleRate, format,
                channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);

	// return handle to client
    trackHandle = new TrackHandle(track);
}
  • 1. Use checkPlaybackThread_l to find the PlaybackThread matching the audio_io_handle_t parameter (they correspond one-to-one)
  • 2. Call PlaybackThread::createTrack_l, which creates a Track object and adds it to mTracks. The Track constructor allocates a block of memory for data exchange between AudioFlinger and AudioTrack (audio_track_cblk_t being the control block) and creates an AudioTrackServerProxy/StaticAudioTrackServerProxy object to manage the buffer (the PlaybackThread will use it to locate readable data in the buffer)
  • 3. Create a TrackHandle, the Track's communication proxy, and assign it to trackHandle

This also confirms the point above: creating an AudioTrack object causes a Track object to be created in some PlaybackThread, and the two correspond to each other.

//@Threads.cpp
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
        const sp<AudioFlinger::Client>& client,
        audio_stream_type_t streamType,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask,
        size_t *pFrameCount,
        const sp<IMemory>& sharedBuffer,
        audio_session_t sessionId,
        audio_output_flags_t *flags,
        pid_t tid,
        int uid,
        status_t *status)
{
	//......
	track = new Track(this, client, streamType, sampleRate, format,
                          channelMask, frameCount, NULL, sharedBuffer,
                          sessionId, uid, *flags, TrackBase::TYPE_DEFAULT);
    //PlaybackThread keeps an array mTracks containing one or more Tracks;
    //each Track corresponds to an AudioTrack created by an application
	mTracks.add(track);
}
//@Tracks.cpp
AudioFlinger::PlaybackThread::Track::Track(
            PlaybackThread *thread,
            const sp<Client>& client,
            audio_stream_type_t streamType,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            const sp<IMemory>& sharedBuffer,
            audio_session_t sessionId,
            int uid,
            audio_output_flags_t flags,
            track_type type)
    :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
                  (sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
                  sessionId, uid, true /*isOut*/,
                  (type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
                  type),
    mFillingUpStatus(FS_INVALID),
    // mRetryCount initialized later when needed
    mSharedBuffer(sharedBuffer),
    mStreamType(streamType),
    mName(-1),  // see note below
    mMainBuffer(thread->mixBuffer()),
    mAuxBuffer(NULL),
    mAuxEffectId(0), mHasVolumeController(false),
    mPresentationCompleteFrames(0),
    mFrameMap(16 /* sink-frame-to-track-frame map memory */),
    // mSinkTimestamp
    mFastIndex(-1),
    mCachedVolume(1.0),
    mIsInvalid(false),
    mAudioTrackServerProxy(NULL),
    mResumeToStopping(false),
    mFlushHwPending(false),
    mFlags(flags)
{
	if (sharedBuffer == 0) {
        mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize, !isExternalTrack(), sampleRate);
    } else {
        mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                mFrameSize);
    }
    mServerProxy = mAudioTrackServerProxy;
}
//@Tracks.cpp
AudioFlinger::ThreadBase::TrackBase::TrackBase(
            ThreadBase *thread,
            const sp<Client>& client,
            uint32_t sampleRate,
            audio_format_t format,
            audio_channel_mask_t channelMask,
            size_t frameCount,
            void *buffer,
            audio_session_t sessionId,
            int clientUid,
            bool isOut,
            alloc_type alloc,
            track_type type)
    :   RefBase(),
        mThread(thread),
        mClient(client),
        mCblk(NULL),
        // mBuffer
        mState(IDLE),
        mSampleRate(sampleRate),
        mFormat(format),
        mChannelMask(channelMask),
        mChannelCount(isOut ?
                audio_channel_count_from_out_mask(channelMask) :
                audio_channel_count_from_in_mask(channelMask)),
        mFrameSize(audio_has_proportional_frames(format) ?
                mChannelCount * audio_bytes_per_sample(format) : sizeof(int8_t)),
        mFrameCount(frameCount),
        mSessionId(sessionId),
        mIsOut(isOut),
        mServerProxy(NULL),
        mId(android_atomic_inc(&nextTrackId)),
        mTerminated(false),
        mType(type),
        mThreadIoHandle(thread->id())
{
    size_t size = sizeof(audio_track_cblk_t);
    size_t bufferSize = (buffer == NULL ? roundup(frameCount) : frameCount) * mFrameSize;
    /*if buffer is NULL and alloc == ALLOC_CBLK*/
    if (buffer == NULL && alloc == ALLOC_CBLK) {
    	/*size starts as the header (the control block); if buffer is NULL, i.e. the app did not allocate one, grow the size by bufferSize*/
        size += bufferSize;
    }

    if (client != 0) {
    	/*allocate the shared memory*/
        mCblkMemory = client->heap()->allocate(size);
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }
    // construct the shared structure in-place.
    if (mCblk != NULL) {
        new(mCblk) audio_track_cblk_t();
        switch (alloc) {
        case ALLOC_READONLY: {
            const sp<MemoryDealer> roHeap(thread->readOnlyHeap());
            if (roHeap == 0 ||
                    (mBufferMemory = roHeap->allocate(bufferSize)) == 0 ||
                    (mBuffer = mBufferMemory->pointer()) == NULL) {
                mCblkMemory.clear();
                mBufferMemory.clear();
                return;
            }
            memset(mBuffer, 0, bufferSize);
            } break;
        case ALLOC_PIPE:
            mBufferMemory = thread->pipeMemory();
            // mBuffer is the virtual address as seen from current process (mediaserver),
            // and should normally be coming from mBufferMemory->pointer().
            // However in this case the TrackBase does not reference the buffer directly.
            // It should reference the buffer via the pipe.
            // Therefore, to detect incorrect usage of the buffer, we set mBuffer to NULL.
            mBuffer = NULL;
            break;
        case ALLOC_CBLK:
            // clear all buffers
            if (buffer == NULL) {
                mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                memset(mBuffer, 0, bufferSize);
            } else {
                mBuffer = buffer;
            }
            break;
        case ALLOC_LOCAL:
            mBuffer = calloc(1, bufferSize);
            break;
        case ALLOC_NONE:
            mBuffer = buffer;
            break;
        }
    }
}
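So for the common ALLOC_CBLK case with buffer == NULL (MODE_STREAM), the single allocation holds the control block immediately followed by the PCM ring buffer; the pointer arithmetic in the ALLOC_CBLK branch above is exactly this layout:

//@ layout of the shared memory block (ALLOC_CBLK, buffer == NULL)
//
//  mCblkMemory --> +------------------------------+ <- mCblk
//                  |  audio_track_cblk_t          |    read/write indices, flags,
//                  |  (the "control block")       |    futex for blocking waits
//                  +------------------------------+ <- mBuffer = (char *)mCblk
//                  |  PCM ring buffer             |       + sizeof(audio_track_cblk_t)
//                  |  roundup(frameCount)         |
//                  |      * mFrameSize bytes      |
//                  +------------------------------+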

In the Track constructor above, if the application uses MODE_STREAM an AudioTrackServerProxy is created to manage the buffer; in MODE_STATIC a StaticAudioTrackServerProxy is created to manage it instead.

As noted earlier: the APP creates an AudioTrack, and a PlaybackThread inside AudioFlinger creates the corresponding Track; the two pass data through shared memory.
The Track manages the buffer with an AudioTrackServerProxy/StaticAudioTrackServerProxy; symmetrically, the AudioTrack manages it with the AudioTrackClientProxy/StaticAudioTrackClientProxy created in AudioTrack::set.

Both AudioTrackServerProxy and StaticAudioTrackServerProxy in Track inherit from ServerProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions: the PlaybackThread calls obtainBuffer to obtain memory that holds data, and after consuming the data releases it with releaseBuffer (a consume-loop sketch follows the declaration below).

//@AudioTrackShared.h
// Proxy used by AudioFlinger server
class ServerProxy : public Proxy {
public:
    virtual status_t    obtainBuffer(Buffer* buffer, bool ackFlush = false);
    virtual void        releaseBuffer(Buffer* buffer);
};
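A minimal sketch of that consume pattern, with a hypothetical mixOrWriteToHal() standing in for the real mixer/HAL machinery (the ServerProxy::Buffer fields and the clamping behavior of obtainBuffer() follow AudioTrackShared.h):

//@ illustrative sketch, not AOSP source
#include <private/media/AudioTrackShared.h>  // framework-internal header
using namespace android;

void mixOrWriteToHal(const void* data, size_t bytes);  // hypothetical sink

void consumeTrack(const sp<ServerProxy>& proxy, size_t frameSize) {
    ServerProxy::Buffer buf;
    buf.mFrameCount = (size_t) -1;  // ask for everything; obtainBuffer clamps to what is ready
    if (proxy->obtainBuffer(&buf) == NO_ERROR && buf.mFrameCount > 0) {
        // buf.mRaw points into the shared memory that the app-side proxy filled
        mixOrWriteToHal(buf.mRaw, buf.mFrameCount * frameSize);
        proxy->releaseBuffer(&buf);  // mark the frames consumed, freeing ring space
    }
}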

Android really is enormous; there are far too many flows to cover them all...

To sum up, the AudioTrack flow can be reduced to three steps:
1. Use the AudioTrack's attributes to find the matching output and PlaybackThread through AudioPolicy
2. Create the corresponding Track inside that PlaybackThread
3. Establish shared memory between the APP's AudioTrack and the Track held in the PlaybackThread's mTracks
