Allwinner audio control flow:
The HAL-layer .so libraries are built from device/softwinner/common/hardware/audio. The audio_hw.c there faces two directions: upward, it implements the standard Android HAL interfaces that AudioFlinger calls; downward, it controls the underlying driver through the standard tinyalsa interfaces, which is how volume control, audio-path switching and so on are carried out. tinyalsa lives in external/tinyalsa, which builds both command-line executables (tinymix and friends) and the libtinyalsa.so library: the executables let you drive the audio hardware directly from a shell, while the .so provides the library functions that audio_hw.c is compiled against, so audio_hw.c can call them.
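As a taste of that tinyalsa layer, here is a minimal sketch of flipping a codec control through libtinyalsa's mixer API; the control name "Audio Spk Switch" is a made-up placeholder (real names come from the codec driver and can be listed with tinymix on the device):

#include <tinyalsa/asoundlib.h>

/* Toggle a hypothetical speaker switch on sound card 0.
 * Equivalent shell usage: tinymix "Audio Spk Switch" 1 */
static int set_speaker_route(int enable)
{
    struct mixer *mixer = mixer_open(0);                 /* card 0 */
    if (mixer == NULL)
        return -1;

    struct mixer_ctl *ctl = mixer_get_ctl_by_name(mixer, "Audio Spk Switch");
    if (ctl == NULL) {
        mixer_close(mixer);
        return -1;
    }

    mixer_ctl_set_value(ctl, 0, enable);                 /* value index 0 */
    mixer_close(mixer);
    return 0;
}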
Let's start from the commonly used upper-layer interfaces, which makes everything easier to understand; otherwise, after wading through the lower layers you still won't know how any of it is actually used. For example, application code often calls AudioSystem.setParameters("routing=8192"), which selects which path the current audio output should take. Let's follow this call all the way down to the hardware.
The call reaches setParameters in frameworks/base/media/java/android/media/AudioSystem.java:
public static native int setParameters(String keyValuePairs);
This is a native method implemented over JNI, in core/jni/android_media_AudioSystem.cpp:
79 static int
80 android_media_AudioSystem_setParameters(JNIEnv *env, jobject thiz, jstring keyValuePairs)
81 {
82 const jchar* c_keyValuePairs = env->GetStringCritical(keyValuePairs, 0);
83 String8 c_keyValuePairs8;
84 if (keyValuePairs) {
85 c_keyValuePairs8 = String8(c_keyValuePairs, env->GetStringLength(keyValuePairs));
86 env->ReleaseStringCritical(keyValuePairs, c_keyValuePairs);
87 }
88 int status = check_AudioSystem_Command(AudioSystem::setParameters(0, c_keyValuePairs8));
89 return status;
90 }
Line 88 calls into media/libmedia/AudioSystem.cpp:
167 status_t AudioSystem::setParameters(audio_io_handle_t ioHandle, const String8& keyValuePairs) {
168 const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
169 if (af == 0) return PERMISSION_DENIED;
170 return af->setParameters(ioHandle, keyValuePairs);
171 }
af->setParameters() then goes over Binder into AudioFlinger::setParameters() in AudioFlinger.cpp (around line 710; the relevant part follows):
747 if (ioHandle == 0) {
748 AutoMutex lock(mHardwareLock);
749 mHardwareStatus = AUDIO_SET_PARAMETER;
750 status_t final_result = NO_ERROR;
751 for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
752 audio_hw_device_t *dev = mAudioHwDevs[i];
753 result = dev->set_parameters(dev, keyValuePairs.string());
754 final_result = result ?: final_result;
755 }
Line 753 is where the HAL's set_parameters is finally invoked, which takes us into device/softwinner/common/hardware/audio:
adev->hw_device.set_parameters = adev_set_parameters;
From here, adev_set_parameters uses str_parms_create_str() to put the key/value pairs into a hash table; str_parms_get_str() reads them back out, letting the HAL decide which output device is currently selected.
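What might that look like? A rough sketch (an assumed shape, not the literal softwinner implementation) using the str_parms helpers from libcutils:

#include <hardware/audio.h>
#include <cutils/str_parms.h>

static int adev_set_parameters(struct audio_hw_device *dev, const char *kvpairs)
{
    /* parse "routing=8192" style pairs into a hash table */
    struct str_parms *parms = str_parms_create_str(kvpairs);
    int routing;
    if (str_parms_get_int(parms, "routing", &routing) == 0) {
        /* switch the output path here, e.g. via tinyalsa mixer controls */
    }
    str_parms_destroy(parms);
    return 0;
}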
The HAL audio library is normally built twice, as audio.primary.default.so and audio.primary.exDroid.so, where exDroid is $(TARGET_BOARD_PLATFORM), i.e. the name of your target platform. Which of the two does Android actually load? That is decided by hw_get_module_by_class() in hardware/libhardware/hardware.c, which walks the following array of properties and only falls back to the default module when nothing matches:
45 static const char *variant_keys[] = {
46 "ro.hardware", /* This goes first so that it can pick up a different
47 file on the emulator. */
48 "ro.product.board",
49 "ro.board.platform",
50 "ro.arch"
51 };
One of these properties carries $(TARGET_BOARD_PLATFORM) (ro.board.platform, and on this platform ro.product.board as well), so the platform-specific library, audio.primary.exDroid.so, is the one that gets loaded.
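Condensed, the lookup logic looks roughly like this (a simplified sketch; the real hardware.c also probes /vendor/lib/hw):

#include <cutils/properties.h>
#include <unistd.h>
#include <stdio.h>
#include <limits.h>

/* For each variant property, probe /system/lib/hw/audio.primary.<value>.so;
 * fall back to audio.primary.default.so if none exists. */
static void find_audio_module(char path[PATH_MAX])
{
    char prop[PROPERTY_VALUE_MAX];
    static const char *variant_keys[] = {
        "ro.hardware", "ro.product.board", "ro.board.platform", "ro.arch",
    };
    for (size_t i = 0; i < sizeof(variant_keys) / sizeof(variant_keys[0]); i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0)
            continue;                      /* property not set */
        snprintf(path, PATH_MAX, "/system/lib/hw/audio.primary.%s.so", prop);
        if (access(path, R_OK) == 0)
            return;                        /* platform-specific module found */
    }
    snprintf(path, PATH_MAX, "/system/lib/hw/audio.primary.default.so");
}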
Now for a few frequently used functions in AudioFlinger.cpp. When sound is played, a playback thread is created first, via:
6753 audio_io_handle_t AudioFlinger::openOutput(audio_module_handle_t module,
6754 audio_devices_t *pDevices,
6755 uint32_t *pSamplingRate,
6756 audio_format_t *pFormat,
6757 audio_channel_mask_t *pChannelMask,
6758 uint32_t *pLatencyMs,
6759 audio_output_flags_t flags)
6760 {
....................................................................................
6785 outHwDev = findSuitableHwDev_l(module, *pDevices);
6786 if (outHwDev == NULL)
6787 return 0;
6788
6789 audio_io_handle_t id = nextUniqueId();
6790
6791 mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
6792
6793 status = outHwDev->open_output_stream(outHwDev,
6794 id,
6795 *pDevices,
6796 (audio_output_flags_t)flags,
6797 &config,
6798 &outStream);
6799
6800 mHardwareStatus = AUDIO_HW_IDLE;
..............................................................................................
6808 if (status == NO_ERROR && outStream != NULL) {
6809 AudioStreamOut *output = new AudioStreamOut(outHwDev, outStream);
6810
6811 if ((flags & AUDIO_OUTPUT_FLAG_DIRECT) ||
6812 (config.format != AUDIO_FORMAT_PCM_16_BIT) ||
6813 (config.channel_mask != AUDIO_CHANNEL_OUT_STEREO)) {
6814 thread = new DirectOutputThread(this, output, id, *pDevices);
6815 ALOGV("openOutput() created direct output: ID %d thread %p", id, thread);
6816 } else {
6817 thread = new MixerThread(this, output, id, *pDevices);
6818 ALOGV("openOutput() created mixer output: ID %d thread %p", id, thread);
6819 }
6820 mPlaybackThreads.add(id, thread);
This mainly opens the hardware device and sets some hardware defaults such as volume; then, depending on the flags, a DirectOutputThread or a MixerThread is created. Their definitions in AudioFlinger.h:
class DirectOutputThread : public PlaybackThread {.................}
and PlaybackThread in turn:
class PlaybackThread : public ThreadBase {...................}
So both are subclasses of PlaybackThread. Line 6820 then adds the thread to mPlaybackThreads, a vector keyed by id; the thread is stored there and the id returned to the caller, so that later playback calls can pass the id (the audio_io_handle_t) back in and the thread is simply looked up from the vector.
When does this thread start running? Right when it is created, as this function shows:
1652 void AudioFlinger::PlaybackThread::onFirstRef()
1653 {
1654 run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
1655 }
That covers playback; recording follows much the same flow, via openInput:
6970 audio_io_handle_t AudioFlinger::openInput(audio_module_handle_t module,
6971 audio_devices_t *pDevices,
6972 uint32_t *pSamplingRate,
6973 audio_format_t *pFormat,
6974 uint32_t *pChannelMask)
6975 {
...................................................................
6995 inHwDev = findSuitableHwDev_l(module, *pDevices);
6996 if (inHwDev == NULL)
6997 return 0;
6998
6999 audio_io_handle_t id = nextUniqueId();
7000
7001 status = inHwDev->open_input_stream(inHwDev, id, *pDevices, &config,
7002 &inStream);
..................................................................................
7022 if (status == NO_ERROR && inStream != NULL) {
7023 AudioStreamIn *input = new AudioStreamIn(inHwDev, inStream);
7024
7025 // Start record thread
7026 // RecorThread require both input and output device indication to forward to audio
7027 // pre processing modules
7028 uint32_t device = (*pDevices) | primaryOutputDevice_l();
7029 thread = new RecordThread(this,
7030 input,
7031 reqSamplingRate,
7032 reqChannels,
7033 id,
7034 device);
7035 mRecordThreads.add(id, thread);
7036 ALOGV("openInput() created record thread: ID %d thread %p", id, thread);
7037 if (pSamplingRate != NULL) *pSamplingRate = reqSamplingRate;
7038 if (pFormat != NULL) *pFormat = config.format;
7039 if (pChannelMask != NULL) *pChannelMask = reqChannels;
7040
7041 input->stream->common.standby(&input->stream->common);
7042
7043 // notify client processes of the new input creation
7044 thread->audioConfigChanged_l(AudioSystem::INPUT_OPENED);
7045 return id;
7046 }
The RecordThread created at line 7029 inherits as follows:
class RecordThread : public ThreadBase, public AudioBufferProvider
Next, to actually start making sound, createTrack is called:
438 sp<IAudioTrack> AudioFlinger::createTrack(
439 pid_t pid,
440 audio_stream_type_t streamType,
441 uint32_t sampleRate,
442 audio_format_t format,
443 uint32_t channelMask,
444 int frameCount,
445 IAudioFlinger::track_flags_t flags,
446 const sp<IMemory>& sharedBuffer,
447 audio_io_handle_t output,
448 pid_t tid,
449 int *sessionId,
450 status_t *status)
451 {
466 {
467 Mutex::Autolock _l(mLock);
468 PlaybackThread *thread = checkPlaybackThread_l(output);
469 PlaybackThread *effectThread = NULL;
470 if (thread == NULL) {
471 ALOGE("unknown output thread");
472 lStatus = BAD_VALUE;
473 goto Exit;
474 }
475
476 client = registerPid_l(pid);
502 track = thread->createTrack_l(client, streamType, sampleRate, format,
503 channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, &lStatus);
504
505 // move effect chain to this output thread if an effect on same session was waiting
506 // for a track to be created
507 if (lStatus == NO_ERROR && effectThread != NULL) {
508 Mutex::Autolock _dl(thread->mLock);
509 Mutex::Autolock _sl(effectThread->mLock);
510 moveEffectChain_l(lSessionId, effectThread, thread, true);
511 }
512
513 // Look for sync events awaiting for a session to be used.
514 for (int i = 0; i < (int)mPendingSyncEvents.size(); i++) {
515 if (mPendingSyncEvents[i]->triggerSession() == lSessionId) {
516 if (thread->isValidSyncEvent(mPendingSyncEvents[i])) {
517 if (lStatus == NO_ERROR) {
518 track->setSyncEvent(mPendingSyncEvents[i]);
519 } else {
520 mPendingSyncEvents[i]->cancel();
521 }
522 mPendingSyncEvents.removeAt(i);
523 i--;
524 }
525 }
526 }
528 if (lStatus == NO_ERROR) {
529 trackHandle = new TrackHandle(track);
530 } else {
531 // remove local strong reference to Client before deleting the Track so that the Client
532 // destructor is called by the TrackBase destructor with mLock held
533 client.clear();
534 track.clear();
535 }
536
537 Exit:
538 if (status != NULL) {
539 *status = lStatus;
540 }
541 return trackHandle;
The function at line 476:
422 sp<AudioFlinger::Client> AudioFlinger::registerPid_l(pid_t pid)
423 {
424 // If pid is already in the mClients wp<> map, then use that entry
425 // (for which promote() is always != 0), otherwise create a new entry and Client.
426 sp<Client> client = mClients.valueFor(pid).promote();
427 if (client == 0) {
428 client = new Client(this, pid);
429 mClients.add(pid, client);
430 }
431
432 return client;
433 }
On first entry client is null, so we take line 428:
5685 AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
5686 : RefBase(),
5687 mAudioFlinger(audioFlinger),
5688 // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
5689 mMemoryDealer(new MemoryDealer(1024*1024, "AudioFlinger::Client")),
5690 mPid(pid),
5691 mTimedTrackCount(0)
5692 {
5693 // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
5694 }
So a block of memory gets allocated; onward.
The output parameter passed in is the id that was added to the vector earlier; line 468's checkPlaybackThread_l fetches that thread back, and line 502 creates a PlaybackThread::Track. One thread can own multiple tracks, one per audio stream: within a single process you can watch a movie and listen to music at the same time, with two tracks outputting at once. Into the function:
1658 sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
1659 const sp<AudioFlinger::Client>& client,
1660 audio_stream_type_t streamType,
1661 uint32_t sampleRate,
1662 audio_format_t format,
1663 uint32_t channelMask,
1664 int frameCount,
1665 const sp<IMemory>& sharedBuffer,
1666 int sessionId,
1667 IAudioFlinger::track_flags_t flags,
1668 pid_t tid,
1669 status_t *status)
...................................................................................
1759 lStatus = initCheck();
1760 if (lStatus != NO_ERROR) {
1761 ALOGE("Audio driver not initialized.");
1762 goto Exit;
1763 }
1764
1765 { // scope for mLock
1766 Mutex::Autolock _l(mLock);
1767
1768 // all tracks in same audio session must share the same routing strategy otherwise
1769 // conflicts will happen when tracks are moved from one output to another by audio policy
1770 // manager
1771 uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
1772 for (size_t i = 0; i < mTracks.size(); ++i) {
1773 sp<Track> t = mTracks[i];
1774 if (t != 0 && !t->isOutputTrack()) {
1775 uint32_t actual = AudioSystem::getStrategyForStream(t->streamType());
1776 if (sessionId == t->sessionId() && strategy != actual) {
1777 ALOGE("createTrack_l() mismatched strategy; expected %u but found %u",
1778 strategy, actual);
1779 lStatus = BAD_VALUE;
1780 goto Exit;
1781 }
1782 }
1783 }
1784
1785 if (!isTimed) {
1786 track = new Track(this, client, streamType, sampleRate, format,
1787 channelMask, frameCount, sharedBuffer, sessionId, flags);
1788 } else {
1789 track = TimedTrack::create(this, client, streamType, sampleRate, format,
1790 channelMask, frameCount, sharedBuffer, sessionId);
1791 }
1796 mTracks.add(track);
Lines 1785-1790 create the track instance, and line 1796 adds it to mTracks, which is also a vector.
Back in createTrack, line 529 wraps the freshly created track in a TrackHandle and returns that to the caller. TrackHandle's inheritance:
class TrackHandle : public android::BnAudioTrack { ............. }
TrackHandle derives from BnAudioTrack, so the caller of createTrack can drive the corresponding Track inside AudioFlinger through the IAudioTrack interface. This is where Binder communication deserves a word: the call from AudioTrack into AudioFlinger is already cross-process, and what the server hands back to the AudioTrack process is trackHandle. Could we simply copy that pointer back to AudioTrack? Of course not: the address means nothing in AudioTrack's process, since the two address spaces are independent. The Binder driver performs the translation and returns a reference to AudioTrack instead. The inheritance in IAudioFlinger.cpp:
class BnAudioTrack : public BnInterface<IAudioTrack>
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
class BBinder : public IBinder
class IBinder : public virtual RefBase
class IAudioTrack : public IInterface
class IInterface : public virtual RefBase
So the relationships are:
BnAudioTrack -- BnInterface -- IAudioTrack -- IInterface -- RefBase
                     |
                     +-- BBinder -- IBinder -- RefBase
BpAudioFlinger in IAudioFlinger.cpp implements the client-side half of the interface:
virtual sp<IAudioTrack> createTrack(
...................
status_t lStatus = remote()->transact(CREATE_TRACK, data, &reply);
125 track = interface_cast<IAudioTrack>(reply.readStrongBinder());
126 }
127 if (status) {
128 *status = lStatus;
129 }
130 return track;
131 }
while on the other side of the process boundary, in BnAudioFlinger::onTransact in IAudioFlinger.cpp:
709 sp<IAudioTrack> track = createTrack(pid,
710 (audio_stream_type_t) streamType, sampleRate, format,
711 channelCount, bufferCount, flags, buffer, output, tid, &sessionId, &status);
712 reply->writeInt32(sessionId);
713 reply->writeInt32(status);
714 reply->writeStrongBinder(track->asBinder());
715 return NO_ERROR;
Here remote()->transact makes the cross-process call into the server's onTransact, and the createTrack invoked there is exactly AudioFlinger::createTrack as analysed above.
What does track->asBinder mean? Look at asBinder's definition:
IInterface.cpp:
30 sp<IBinder> IInterface::asBinder()
31 {
32 return this ? onAsBinder() : NULL;
33 }
IInterface.h:
141 inline IBinder* BpInterface<INTERFACE>::onAsBinder()
142 {
143 return remote();
144 }
and
128 template<typename INTERFACE>
129 IBinder* BnInterface<INTERFACE>::onAsBinder()
130 {
131 return this;
132 }
Binder.h:
inline IBinder* remote() { return mRemote; }
IBinder* const mRemote;
We are on the BnInterface side here, so `return this` is what runs: the track object itself comes back. (Couldn't we simply write writeStrongBinder(track)? I think so; other places do exactly that.)
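For reference, interface_cast is defined in IInterface.h, and together with the IMPLEMENT_META_INTERFACE macro it expands to roughly the following (simplified):

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

// generated by IMPLEMENT_META_INTERFACE(AudioTrack, "android.media.IAudioTrack"):
sp<IAudioTrack> IAudioTrack::asInterface(const sp<IBinder>& obj)
{
    sp<IAudioTrack> intr;
    if (obj != NULL) {
        // same-process case: the local Bn object itself is returned
        intr = static_cast<IAudioTrack*>(
            obj->queryLocalInterface(IAudioTrack::descriptor).get());
        if (intr == NULL) {
            // remote case: wrap the BpBinder handle in a proxy
            intr = new BpAudioTrack(obj);
        }
    }
    return intr;
}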
Back on the AudioTrack side, expanding the interface_cast<IAudioTrack> template therefore boils down to new BpAudioTrack(reply.readStrongBinder()):
Parcel.cpp:
1040 sp<IBinder> Parcel::readStrongBinder() const
1041 {
1042 sp<IBinder> val;
1043 unflatten_binder(ProcessState::self(), *this, &val);
1044 return val;
1045 }
and the unflatten_binder function:
236 status_t unflatten_binder(const sp<ProcessState>& proc,
237 const Parcel& in, sp<IBinder>* out)
238 {
239 const flat_binder_object* flat = in.readObject(false);
240
241 if (flat) {
242 switch (flat->type) {
243 case BINDER_TYPE_BINDER:
244 *out = static_cast<IBinder*>(flat->cookie);
245 return finish_unflatten_binder(NULL, *flat, in);
246 case BINDER_TYPE_HANDLE:
247 *out = proc->getStrongProxyForHandle(flat->handle);
248 return finish_unflatten_binder(
249 static_cast<BpBinder*>(out->get()), *flat, in);
250 }
251 }
252 return BAD_TYPE;
253 }
So this ends up in ProcessState::self()->getStrongProxyForHandle(); ProcessState::self() returns a global static:
74 sp<ProcessState> ProcessState::self()
75 {
76 Mutex::Autolock _l(gProcessMutex);
77 if (gProcess != NULL) {
78 return gProcess;
79 }
80 gProcess = new ProcessState;
81 return gProcess;
82 }
Clearly a singleton. Where was it first opened? Back when the AudioTrack process obtained the AudioFlinger service; a process keeps a single Binder memory mapping, so the existing instance is returned directly. With that mapping in place, a new BpAudioTrack(), and the local reference to the remote BnAudioTrack obtained via reply.readStrongBinder(), we can now talk to the remote side. Note also that this AudioTrack binder never registers itself with servicemanager, so it is an anonymous binder.
To make the AudioFlinger/AudioTrack relationship easier to grasp, quoting a well-known write-up from the web:
"TrackHandle is a Binder-based Track that the AudioTrack (AT) side obtains by calling AudioFlinger's (AF) createTrack.
This TrackHandle is really a cross-process-capable wrapper around the PlaybackThread::Track that does the actual work.
Meaning what? PlaybackThread::Track is the thing that genuinely works inside AF; to support cross-process access we wrap it in a TrackHandle, so whatever AudioTrack invokes on the TrackHandle is actually carried out by TrackHandle calling into PlaybackThread::Track. Think of it as a Proxy pattern.
This is one reason AudioFlinger is so absurdly complex!!!"
Track exposes plenty of methods for the caller to control the audio:
4373 /*static*/ void AudioFlinger::PlaybackThread::Track::appendDumpHeader(String8& result)
4379 void AudioFlinger::PlaybackThread::Track::dump(char* buffer, size_t size)
4464 status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
AudioBufferProvider::Buffer* buffer, int64_t pts)
4522 size_t AudioFlinger::PlaybackThread::Track::framesReady()
4527 bool AudioFlinger::PlaybackThread::Track::isReady()
4539 status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event,
int triggerSession)
4585 void AudioFlinger::PlaybackThread::Track::stop()
4620 void AudioFlinger::PlaybackThread::Track::pause()
4643 void AudioFlinger::PlaybackThread::Track::flush()
4667 void AudioFlinger::PlaybackThread::Track::reset()
4685 void AudioFlinger::PlaybackThread::Track::mute(bool muted)
4690 status_t AudioFlinger::PlaybackThread::Track::attachAuxEffect(int EffectId)
4739 void AudioFlinger::PlaybackThread::Track::setAuxBuffer(int EffectId, int32_t *buffer)
4745 bool AudioFlinger::PlaybackThread::Track::presentationComplete(size_t framesWritten,
4765 void AudioFlinger::PlaybackThread::Track::triggerEvents(AudioSystem::sync_event_t type)
4778 uint32_t AudioFlinger::PlaybackThread::Track::getVolumeLR()
4803 status_t AudioFlinger::PlaybackThread::Track::setSyncEvent(const sp<SyncEvent>& event)
and so on.
Recording is similar; the function involved is:
5836 sp<IAudioRecord> AudioFlinger::openRecord(
5837 pid_t pid,
5838 audio_io_handle_t input,
5839 uint32_t sampleRate,
5840 audio_format_t format,
5841 uint32_t channelMask,
5842 int frameCount,
5843 IAudioFlinger::track_flags_t flags,
5844 int *sessionId,
5845 status_t *status)
5846 {
5864 thread = checkRecordThread_l(input);
5865 if (thread == NULL) {
5866 lStatus = BAD_VALUE;
5867 goto Exit;
5868 }
5869
5870 client = registerPid_l(pid);
5881 // create new record track. The record track uses one track in mHardwareMixerThread by convention.
5882 recordTrack = thread->createRecordTrack_l(client,
5883 sampleRate,
5884 format,
5885 channelMask,
5886 frameCount,
5887 lSessionId,
5888 &lStatus);
5889 }
5890 if (lStatus != NO_ERROR) {
5891 // remove local strong reference to Client before deleting the RecordTrack so that the Client
5892 // destructor is called by the TrackBase destructor with mLock held
5893 client.clear();
5894 recordTrack.clear();
5895 goto Exit;
5896 }
5897
5898 // return to handle to client
5899 recordHandle = new RecordHandle(recordTrack);
5900 lStatus = NO_ERROR;
5901
5902 Exit:
5903 if (status) {
5904 *status = lStatus;
5905 }
5906 return recordHandle;
5907 }
Likewise, RecordTrack exposes methods to its callers:
5367 status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer, int64_t pts)
5406 status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
int triggerSession)
5418 void AudioFlinger::RecordThread::RecordTrack::stop()
5431 void AudioFlinger::RecordThread::RecordTrack::dump(char* buffer, size_t size)
AudioFlinger has one more function worth noting, openDuplicateOutput, called when two hardware outputs are opened at once, for example speaker and Bluetooth together:
6879 audio_io_handle_t AudioFlinger::openDuplicateOutput(audio_io_handle_t output1,
6880 audio_io_handle_t output2)
6881 {
6882 Mutex::Autolock _l(mLock);
6883 MixerThread *thread1 = checkMixerThread_l(output1);
6884 MixerThread *thread2 = checkMixerThread_l(output2);
6885
6886 if (thread1 == NULL || thread2 == NULL) {
6887 ALOGW("openDuplicateOutput() wrong output mixer type for output %d or %d", output1, output2);
6888 return 0;
6889 }
6890
6891 audio_io_handle_t id = nextUniqueId();
6892 DuplicatingThread *thread = new DuplicatingThread(this, thread1, id);
6893 thread->addOutputTrack(thread2);
6894 mPlaybackThreads.add(id, thread);
6895 // notify client processes of the new output creation
6896 thread->audioConfigChanged_l(AudioSystem::OUTPUT_OPENED);
6897 return id;
6898 }
Next, let's take frameworks/av/media/libmedia/ToneGenerator.cpp as an example of how AudioTrack is used and how it talks to AudioFlinger, starting from its init function:
1011 bool ToneGenerator::initAudioTrack() {
1012
1013 if (mpAudioTrack) {
1014 delete mpAudioTrack;
1015 mpAudioTrack = NULL;
1016 }
1017
1018 // Open audio track in mono, PCM 16bit, default sampling rate, default buffer size
1019 mpAudioTrack = new AudioTrack();
1020 ALOGV("Create Track: %p", mpAudioTrack);
1021
1022 mpAudioTrack->set(mStreamType,
1023 0, // sampleRate
1024 AUDIO_FORMAT_PCM_16_BIT,
1025 AUDIO_CHANNEL_OUT_MONO,
1026 0, // frameCount
1027 AUDIO_OUTPUT_FLAG_FAST,
1028 audioCallback,
1029 this, // user
1030 0, // notificationFrames
1031 0, // sharedBuffer
1032 mThreadCanCallJava);
1033
1034 if (mpAudioTrack->initCheck() != NO_ERROR) {
1035 ALOGE("AudioTrack->initCheck failed");
1036 goto initAudioTrack_exit;
1037 }
1038
1039 mpAudioTrack->setVolume(mVolume, mVolume);
1040
1041 mState = TONE_INIT;
1042
1043 return true;
Line 1019 news an AudioTrack; the code lives in frameworks/av/media/libmedia/AudioTrack.cpp:
89 AudioTrack::AudioTrack()
90 : mStatus(NO_INIT),
91 mIsTimed(false),
92 mPreviousPriority(ANDROID_PRIORITY_NORMAL),
93 mPreviousSchedulingGroup(SP_DEFAULT)
94 {
95 }
Nothing much happens there, so continue to line 1022, mpAudioTrack->set():
180 status_t AudioTrack::set(
181 audio_stream_type_t streamType,
182 uint32_t sampleRate,
183 audio_format_t format,
184 int channelMask,
185 int frameCount,
186 audio_output_flags_t flags,
187 callback_t cbf,
188 void* user,
189 int notificationFrames,
190 const sp<IMemory>& sharedBuffer,
191 bool threadCanCallJava,
192 int sessionId)
193 {
255 audio_io_handle_t output = AudioSystem::getOutput(
256 streamType,
257 sampleRate, format, channelMask,
258 flags);
259
260 if (output == 0) {
261 ALOGE("Could not get audio output for stream type %d", streamType);
262 return BAD_VALUE;
263 }
275 if (cbf != NULL) {
276 mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
277 mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
278 }
279
280 // create the IAudioTrack
281 status_t status = createTrack_l(streamType,
282 sampleRate,
283 format,
284 (uint32_t)channelMask,
285 frameCount,
286 flags,
287 sharedBuffer,
288 output);
289
290 if (status != NO_ERROR) {
291 if (mAudioTrackThread != 0) {
292 mAudioTrackThread->requestExit();
293 mAudioTrackThread.clear();
294 }
295 return status;
296 }
Line 255's getOutput goes through the audio policy under hardware/libhardware_legacy/audio to decide which output device to use; it ultimately calls AudioFlinger::openOutput (seen above) to open the output device and create a MixerThread (or DirectOutputThread), and returns an audio_io_handle_t into output. As explained earlier, that is the thread's unique id, and the matching thread can later be fetched from the vector by it. Next, line 276 news an AudioTrackThread, whose definition is:
459 /* a small internal class to handle the callback */
460 class AudioTrackThread : public Thread
461 {
462 public:
463 AudioTrackThread(AudioTrack& receiver, bool bCanCallJava = false);
464
465 // Do not call Thread::requestExitAndWait() without first calling requestExit().
466 // Thread::requestExitAndWait() is not virtual, and the implementation doesn't do enough.
467 virtual void requestExit();
468
469 void pause(); // suspend thread from execution at next loop boundary
470 void resume(); // allow thread to execute, if not requested to exit
471
472 private:
So it is a thread class: once run() is called, the thread body threadLoop executes. Its constructor:
1450 AudioTrack::AudioTrackThread::AudioTrackThread(AudioTrack& receiver, bool bCanCallJava)
1451 : Thread(bCanCallJava), mReceiver(receiver), mPaused(true)
1452 {
1453 }
stores the enclosing AudioTrack into mReceiver; then run() starts the thread (note that an ordinary Java thread starts via start() instead):
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
The heart of it is processAudioBuffer at line 1469 (we will come back to this).
With the thread started, back to AudioTrack::set, which reaches createTrack_l at line 281:
743 status_t AudioTrack::createTrack_l(
744 audio_stream_type_t streamType,
745 uint32_t sampleRate,
746 audio_format_t format,
747 uint32_t channelMask,
748 int frameCount,
749 audio_output_flags_t flags,
750 const sp<IMemory>& sharedBuffer,
751 audio_io_handle_t output)
752 {
................ a long function; the early part mostly computes the sample rate, buffer sizes, and so on .................
873 sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
874 streamType,
875 sampleRate,
876 format,
877 channelMask,
878 frameCount,
879 trackFlags,
880 sharedBuffer,
881 output,
882 tid,
883 &mSessionId,
884 &status);
885
886 if (track == 0) {
887 ALOGE("AudioFlinger could not create track, status: %d", status);
888 return status;
889 }
890 sp<IMemory> cblk = track->getCblk();
891 if (cblk == 0) {
892 ALOGE("Could not get control block");
893 return NO_INIT;
894 }
895 mAudioTrack = track;
896 mCblkMemory = cblk;
897 mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
898 // old has the previous value of mCblk->flags before the "or" operation
899 int32_t old = android_atomic_or(CBLK_DIRECTION_OUT, &mCblk->flags);
..................................................
Line 873 calls audioFlinger->createTrack, analysed earlier: it creates the track and returns the handle that controls it.
Then line 890, getCblk:
5767 sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
5768 return mTrack->getCblk();
5769 }
As for mTrack, see AudioFlinger's TrackHandle constructor:
5753 AudioFlinger::TrackHandle::TrackHandle(const sp<AudioFlinger::PlaybackThread::Track>& track)
5754 : BnAudioTrack(),
5755 mTrack(track)
5756 {
5757 }
mTrack was assigned the track, so this calls the track's getCblk; and since this track belongs to a PlaybackThread, look at Track's constructor:
4273 AudioFlinger::PlaybackThread::Track::Track(
4274 PlaybackThread *thread,
4275 const sp<Client>& client,
4276 audio_stream_type_t streamType,
4277 uint32_t sampleRate,
4278 audio_format_t format,
4279 uint32_t channelMask,
4280 int frameCount,
4281 const sp<IMemory>& sharedBuffer,
4282 int sessionId,
4283 IAudioFlinger::track_flags_t flags)
4284 : TrackBase(thread, client, sampleRate, format, channelMask, frameCount, sharedBuffer, sessionId),
Track inherits TrackBase, so look at the TrackBase class definition:
class TrackBase : public ExtendedAudioBufferProvider, public RefBase {
..............................
sp<IMemory> getCblk() const { return mCblkMemory; }
audio_track_cblk_t* cblk() const { return mCblk; }
..............................
sp<IMemory> mCblkMemory;
.............................
}
There is the getCblk function: it returns the IMemory, which refers to a block of anonymous shared memory. Where does it get initialized:
4165 } else {
4166 mCblk = (audio_track_cblk_t *)(new uint8_t[size]);
4167 // construct the shared structure in-place.
4168 new(mCblk) audio_track_cblk_t();
4169 // clear all buffers
4170 mCblk->frameCount = frameCount;
4171 mCblk->sampleRate = sampleRate;
4172 // uncomment the following lines to quickly test 32-bit wraparound
4173 // mCblk->user = 0xffff0000;
4174 // mCblk->server = 0xffff0000;
4175 // mCblk->userBase = 0xffff0000;
4176 // mCblk->serverBase = 0xffff0000;
4177 mChannelCount = channelCount;
4178 mChannelMask = channelMask;
4179 mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
4180 memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
4181 // Force underrun condition to avoid false underrun callback until first data is
4182 // written to buffer (other flags are cleared)
4183 mCblk->flags = CBLK_UNDERRUN_ON;
4184 mBufferEnd = (uint8_t *)mBuffer + bufferSize;
4185 }
4186 }
Look at the new on line 4168: this is C++ placement new. What is it for? The parentheses after new hold a buffer, and what follows is a class constructor: placement new constructs the object inside that given buffer. An ordinary new cannot create an object at an address of our choosing, but placement new can, which is exactly what we need here: take a block of shared memory and construct an object on top of it, so the object becomes visible in both processes. Brilliant. How did anyone think of that?
In effect, AudioFlinger creates a block of shared memory for AudioTrack; treat it as a FIFO (audio_track_cblk_t): AudioTrack pushes data in, and AudioFlinger pulls the data out, runs it through the Mixer, and sends it to the device.
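To make the idiom concrete, a self-contained toy; cblk_demo is a stand-in for audio_track_cblk_t, and the static array stands in for the ashmem mapping:

#include <new>
#include <stdint.h>
#include <stddef.h>

struct cblk_demo {                       // stand-in for audio_track_cblk_t
    uint32_t user;                       // client (writer) position
    uint32_t server;                     // mixer (reader) position
    cblk_demo() : user(0), server(0) {}
};

int main()
{
    // pretend this buffer is the shared ashmem mapping (uint64_t for alignment)
    static uint64_t shared_mem[(sizeof(cblk_demo) + 4096) / 8 + 1];
    uint8_t *shared = reinterpret_cast<uint8_t *>(shared_mem);

    cblk_demo *cblk = new (shared) cblk_demo();   // construct in-place, no heap
    uint8_t *pcm = shared + sizeof(cblk_demo);    // PCM FIFO sits right after it
    (void)cblk; (void)pcm;
    return 0;
}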
Then back in createTrack_l: having obtained the IMemory pointer, it sets the volume and a few other parameters.
That finishes ToneGenerator::initAudioTrack; next, ToneGenerator::startTone plays the tone:
881 bool ToneGenerator::startTone(tone_type toneType, int durationMs) {
882 bool lResult = false;
883 status_t lStatus;
884
885 if ((toneType < 0) || (toneType >= NUM_TONES))
886 return lResult;
887
888 if (mState == TONE_IDLE) {
889 ALOGV("startTone: try to re-init AudioTrack");
890 if (!initAudioTrack()) {
891 return lResult;
892 }
893 }
..........................................................
916 if (mState == TONE_INIT) {
917 if (prepareWave()) {
918 ALOGV("Immediate start, time %d", (unsigned int)(systemTime()/1000000));
919 lResult = true;
920 mState = TONE_STARTING;
921 mLock.unlock();
922 mpAudioTrack->start();
923 mLock.lock();
924 if (mState == TONE_STARTING) {
925 ALOGV("Wait for start callback");
926 lStatus = mWaitCbkCond.waitRelative(mLock, seconds(3));
927 if (lStatus != NO_ERROR) {
928 ALOGE("--- Immediate start timed out, status %d", lStatus);
929 mState = TONE_IDLE;
930 lResult = false;
931 }
932 }
933 } else {
934 mState = TONE_IDLE;
935 }
936 } else {
937 ALOGV("Delayed start");
938 mState = TONE_RESTARTING;
939 lStatus = mWaitCbkCond.waitRelative(mLock, seconds(3));
940 if (lStatus == NO_ERROR) {
941 if (mState != TONE_IDLE) {
942 lResult = true;
943 }
944 ALOGV("cond received");
Line 917 prepares the waveform (adjusting the output frequency to produce the various tones); line 922, mpAudioTrack->start(), begins playback by calling straight into AudioTrack::start:
367 void AudioTrack::start()
368 {
369 sp<AudioTrackThread> t = mAudioTrackThread;
370 status_t status = NO_ERROR;
371
372 ALOGV("start %p", this);
373
374 AutoMutex lock(mLock);
375 // acquire a strong reference on the IMemory and IAudioTrack so that they cannot be destroyed
376 // while we are accessing the cblk
377 sp<IAudioTrack> audioTrack = mAudioTrack;
378 sp<IMemory> iMem = mCblkMemory;
379 audio_track_cblk_t* cblk = mCblk;
380
381 if (!mActive) {
382 mFlushed = false;
383 mActive = true;
384 mNewPosition = cblk->server + mUpdatePeriod;
385 cblk->lock.lock();
386 cblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
387 cblk->waitTimeMs = 0;
388 android_atomic_and(~CBLK_DISABLED_ON, &cblk->flags);
389 if (t != 0) {
390 t->resume();
391 } else {
392 mPreviousPriority = getpriority(PRIO_PROCESS, 0);
393 get_sched_policy(0, &mPreviousSchedulingGroup);
394 androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
395 }
396
397 ALOGV("start %p before lock cblk %p", this, mCblk);
398 if (!(cblk->flags & CBLK_INVALID_MSK)) {
399 cblk->lock.unlock();
400 ALOGV("mAudioTrack->start()");
401 status = mAudioTrack->start();
402 cblk->lock.lock();
403 if (status == DEAD_OBJECT) {
404 android_atomic_or(CBLK_INVALID_ON, &cblk->flags);
405 }
406 }
407 if (cblk->flags & CBLK_INVALID_MSK) {
408 status = restoreTrack_l(cblk, true);
409 }
410 cblk->lock.unlock();
411 if (status != NO_ERROR) {
412 ALOGV("start() failed");
413 mActive = false;
414 if (t != 0) {
415 t->pause();
416 } else {
417 setpriority(PRIO_PROCESS, 0, mPreviousPriority);
418 set_sched_policy(0, mPreviousSchedulingGroup);
419 }
420 }
421 }
422
423 }
Look at line 401, mAudioTrack->start(): mAudioTrack is the track returned by audioFlinger->createTrack, i.e. the handle controlling that track, so we land in AudioFlinger::PlaybackThread::Track::start:
4539 status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event,
4540 int triggerSession)
4541 {
4542 status_t status = NO_ERROR;
4543 ALOGV("start(%d), calling pid %d session %d",
4544 mName, IPCThreadState::self()->getCallingPid(), mSessionId);
4545
4546 sp<ThreadBase> thread = mThread.promote();
4547 if (thread != 0) {
4548 Mutex::Autolock _l(thread->mLock);
4549 track_state state = mState;
4550 // here the track could be either new, or restarted
4551 // in both cases "unstop" the track
4552 if (mState == PAUSED) {
4553 mState = TrackBase::RESUMING;
4554 ALOGV("PAUSED => RESUMING (%d) on thread %p", mName, this);
4555 } else {
4556 mState = TrackBase::ACTIVE;
4557 ALOGV("? => ACTIVE (%d) on thread %p", mName, this);
4558 }
4559
4560 if (!isOutputTrack() && state != ACTIVE && state != RESUMING) {
4561 thread->mLock.unlock();
4562 status = AudioSystem::startOutput(thread->id(), mStreamType, mSessionId);
4563 thread->mLock.lock();
4564
4565 #ifdef ADD_BATTERY_DATA
4566 // to track the speaker usage
4567 if (status == NO_ERROR) {
4568 addBatteryData(IMediaPlayerService::kBatteryDataAudioFlingerStart);
4569 }
4570 #endif
4571 }
4572 if (status == NO_ERROR) {
4573 PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
4574 playbackThread->addTrack_l(this);
4575 } else {
4576 mState = state;
4577 triggerEvents(AudioSystem::SYNC_EVENT_PRESENTATION_COMPLETE);
4578 }
4579 } else {
4580 status = BAD_VALUE;
4581 }
4582 return status;
4583 }
Line 4573 fetches the playbackThread saved earlier. But hadn't everything already been stored in the vector? Why does line 4574's addTrack_l add the track somewhere else again? Let's look inside:
1888 status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
1889 {
1890 status_t status = ALREADY_EXISTS;
1891
1892 // set retry count for buffer fill
1893 track->mRetryCount = kMaxTrackStartupRetries;
1894 if (mActiveTracks.indexOf(track) < 0) {
1895 // the track is newly added, make sure it fills up all its
1896 // buffers before playing. This is to ensure the client will
1897 // effectively get the latency it requested.
1898 track->mFillingUpStatus = Track::FS_FILLING;
1899 track->mResetDone = false;
1900 track->mPresentationCompleteFrames = 0;
1901 mActiveTracks.add(track);
1902 if (track->mainBuffer() != mMixBuffer) {
1903 sp<EffectChain> chain = getEffectChain_l(track->sessionId());
1904 if (chain != 0) {
1905 ALOGV("addTrack_l() starting track on chain %p for session %d", chain.get(), track->sessionId());
1906 chain->incActiveTrackCnt();
1907 }
1908 }
1909
1910 status = NO_ERROR;
1911 }
1912
1913 ALOGV("mWaitWorkCV.broadcast");
1914 mWaitWorkCV.broadcast();
1915
1916 return status;
1917 }
Line 1894 first checks whether mActiveTracks already contains the track; if not, line 1901 adds it. As the name says, this is the array of active tracks, the tracks that are about to do real work.
Line 1914, mWaitWorkCV.broadcast(), is a broadcast. Notifying whom? The MixerThread (or DirectOutputThread) loop, telling it to get to work:
2505 bool AudioFlinger::PlaybackThread::threadLoop()
2506 {
2507 Vector< sp<Track> > tracksToRemove;
2508
2509 standbyTime = systemTime();
2533 while (!exitPending())
2534 {
2535 cpuStats.sample(myName);
2536
2537 Vector< sp<EffectChain> > effectChains;
2538
2539 processConfigEvents();
2540
2541 { // scope for mLock
2542
2543 Mutex::Autolock _l(mLock);
2544
2545 if (checkForNewParameters_l()) {
2546 cacheParameters_l();
2547 }
2548
2549 saveOutputTracks();
The steps an application goes through to play audio with AudioTrack (a native-side sketch of the same sequence follows the list):
1. new AudioTrack() --> JNI native_setup
2. audio.play(); --> JNI native_start();
   byte[] buffer = new byte[4096];
3. audio.write(buffer, 0, 4096); --> JNI native_write_byte
4. audio.stop();
5. audio.release();
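For comparison, the same five steps on the native side might look like the sketch below, written against the AudioTrack::set() signature quoted earlier; the stream type, sample rate, and error handling are illustrative:

#include <media/AudioTrack.h>
using namespace android;

void play_pcm_sketch(const int16_t *pcm, size_t bytes)
{
    sp<AudioTrack> track = new AudioTrack();          // step 1: native_setup
    status_t err = track->set(AUDIO_STREAM_MUSIC,
                              44100,                  // sampleRate
                              AUDIO_FORMAT_PCM_16_BIT,
                              AUDIO_CHANNEL_OUT_STEREO,
                              0,                      // frameCount: take the default
                              AUDIO_OUTPUT_FLAG_NONE,
                              NULL, NULL);            // no callback: push mode
    if (err != NO_ERROR || track->initCheck() != NO_ERROR)
        return;
    track->start();                                   // step 2: native_start
    track->write(pcm, bytes);                         // step 3: native_write_byte
    track->stop();                                    // step 4
}                                                     // step 5: released via sp<> refcount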
1. native_setup in android_media_AudioTrack.cpp: the key calls are lpTrack = new AudioTrack() and lpTrack->set(); inside set, the key calls are AudioSystem::getOutput() and new AudioTrackThread(). Tracing them:
AudioTrack.cpp:
180 status_t AudioTrack::set( ----------> audio_io_handle_t output = AudioSystem::getOutput(
AudioSystem.cpp:
audio_io_handle_t AudioSystem::getOutput( --------> return aps->getOutput(stream, samplingRate, format, channels, flags);
AudioPolicyService.cpp
audio_io_handle_t AudioPolicyService::getOutput( ---------> return mpAudioPolicy->get_output(mpAudioPolicy, stream, samplingRate, format,
channels, flags);
hardware/libhardware_legacy/audio/audio_policy_hal.cpp
lap->policy.get_output = ap_get_output; -----------> return lap->apm->getOutput((AudioSystem::stream_type)stream,
AudioPolicyManagerBase.cpp
audio_io_handle_t AudioPolicyManagerBase::getOutput(AudioSystem::stream_type stream, ------------> mTestOutputs[mCurOutput] = mpClientInterface->openOutput(0, &outputDesc->mDevice, -----------> mpClientInterface = clientInterface; ----------------->
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
Here, in AudioPolicyManagerDefault.h:
25 class AudioPolicyManagerDefault: public AudioPolicyManagerBase
26 {
27
28 public:
29 AudioPolicyManagerDefault(AudioPolicyClientInterface *clientInterface)
30 : AudioPolicyManagerBase(clientInterface) {}
31
32 virtual ~AudioPolicyManagerDefault() {}
33
34 };
35 };
So AudioPolicyManagerBase is subclassed by AudioPolicyManagerDefault; look at where the latter is constructed:
AudioPolicyManagerDefault.cpp
24 extern "C" AudioPolicyInterface* createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
25 {
26 return new AudioPolicyManagerDefault(clientInterface);
27 }
Next, where createAudioPolicyManager is called from:
audio_policy_hal.cpp
lap->apm = createAudioPolicyManager(lap->service_client); ---------------> 359 lap->service_client = new AudioPolicyCompatClient(aps_ops, service);
AudioPolicyCompatClient's constructor lives in AudioPolicyCompatClient.h:
34 AudioPolicyCompatClient(struct audio_policy_service_ops *serviceOps,
35 void *service) :
36 mServiceOps(serviceOps) , mService(service) {}
...................................
struct audio_policy_service_ops* mServiceOps;
...............................
}
So the call lands in the openOutput method of AudioPolicyCompatClient.cpp:
38 audio_io_handle_t AudioPolicyCompatClient::openOutput(audio_module_handle_t module,
39 audio_devices_t *pDevices,
40 uint32_t *pSamplingRate,
41 audio_format_t *pFormat,
42 audio_channel_mask_t *pChannelMask,
43 uint32_t *pLatencyMs,
44 audio_output_flags_t flags)
45 {
46 return mServiceOps->open_output_on_module(mService, module, pDevices, pSamplingRate,
47 pFormat, pChannelMask, pLatencyMs,
48 flags);
49 }
The mServiceOps here was assigned in the AudioPolicyCompatClient constructor above, so back to audio_policy_hal.cpp:
311 static int create_legacy_ap(const struct audio_policy_device *device,
312 struct audio_policy_service_ops *aps_ops,
313 void *service,
314 struct audio_policy **ap)
(this is where the aps_ops and service arguments come from),
dev->device.create_audio_policy = create_legacy_ap;
and create_audio_policy itself is called from the AudioPolicyService.cpp constructor:
83 rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
84 &mpAudioPolicy);
The aps_ops passed in there is defined in AudioPolicyService:
1537 namespace {
1538 struct audio_policy_service_ops aps_ops = {
1539 open_output : aps_open_output,
1540 open_duplicate_output : aps_open_dup_output,
1541 close_output : aps_close_output,
1542 suspend_output : aps_suspend_output,
1543 restore_output : aps_restore_output,
1544 open_input : aps_open_input,
1545 close_input : aps_close_input,
1546 set_stream_volume : aps_set_stream_volume,
1547 set_stream_output : aps_set_stream_output,
1548 set_parameters : aps_set_parameters,
1549 get_parameters : aps_get_parameters,
1550 start_tone : aps_start_tone,
1551 stop_tone : aps_stop_tone,
1552 set_voice_volume : aps_set_voice_volume,
1553 move_effects : aps_move_effects,
1554 load_hw_module : aps_load_hw_module,
1555 open_output_on_module : aps_open_output_on_module,
1556 open_input_on_module : aps_open_input_on_module,
1557 };
1558 }; // namespace <unnamed>
So the call goes to aps_open_output_on_module:
1378 return af->openOutput(module, pDevices, pSamplingRate, pFormat, pChannelMask,
1379 pLatencyMs, flags);
which finally invokes AudioFlinger::openOutput, already analysed above.
Right after getOutput returns, AudioTrack::set creates the callback thread and starts running it:
275 if (cbf != NULL) {
276 mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
277 mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
278 }
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
At line 1463, mPaused was initialized to true in the AudioTrackThread constructor, so the loop immediately waits for a signal to wake it.
So now there is a thread on the client side and a thread on the service side; all that remains is for the client to keep writing data and the service to keep reading it.
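In toy form, the contract on that shared FIFO (an assumed simplification of audio_track_cblk_t's user/server counters, ignoring the real locking and wraparound handling):

#include <stdint.h>

struct fifo_demo {
    volatile uint32_t user;       // total frames the client has written
    volatile uint32_t server;     // total frames the mixer has consumed
    uint32_t frameCount;          // capacity of the ring buffer
};

// frames the mixer may read right now
static uint32_t frames_ready(const fifo_demo &f) { return f.user - f.server; }

// frames the client may still write before the ring is full
static uint32_t frames_free(const fifo_demo &f)  { return f.frameCount - frames_ready(f); }

// map a monotonically increasing position onto the ring
static uint32_t ring_index(const fifo_demo &f, uint32_t pos) { return pos % f.frameCount; }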
Continuing:
281 status_t status = createTrack_l(streamType,
282 sampleRate,
283 format,
284 (uint32_t)channelMask,
285 frameCount,
286 flags,
287 sharedBuffer,
288 output);
Inside that function the chain reaches audioFlinger->createTrack, whose Binder round trip was covered in detail earlier.
So, moving on:
890 sp<IMemory> cblk = track->getCblk();
which crosses the process boundary into getCblk in AudioFlinger.cpp:
5767 sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
5768 return mTrack->getCblk();
5769 }
mTrack is the Track new'ed earlier, yet the Track class itself defines no Track::getCblk, so it must come from a parent. The hierarchy:
class Track : public TrackBase, public VolumeProvider {
.....................................
}
and:
class TrackBase : public ExtendedAudioBufferProvider, public RefBase {
................................
sp<IMemory> getCblk() const { return mCblkMemory; }
................................
sp<IMemory> mCblkMemory;
}
Where does mCblkMemory get assigned? Inside the TrackBase base-class constructor run during the earlier new Track():
4099 AudioFlinger::ThreadBase::TrackBase::TrackBase(
..................................................
4133 if (client != NULL) {
4134 mCblkMemory = client->heap()->allocate(size);
4135 if (mCblkMemory != 0) {
4136 mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
4137 if (mCblk != NULL) { // construct the shared structure in-place.
4138 new(mCblk) audio_track_cblk_t();
4139 // clear all buffers
4140 mCblk->frameCount = frameCount;
4141 mCblk->sampleRate = sampleRate;
4142 // uncomment the following lines to quickly test 32-bit wraparound
4143 // mCblk->user = 0xffff0000;
4144 // mCblk->server = 0xffff0000;
4145 // mCblk->userBase = 0xffff0000;
4146 // mCblk->serverBase = 0xffff0000;
4147 mChannelCount = channelCount;
4148 mChannelMask = channelMask;
4149 if (sharedBuffer == 0) {
4150 mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
4151 memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
4152 // Force underrun condition to avoid false underrun callback until first data is
4153 // written to buffer (other flags are cleared)
4154 mCblk->flags = CBLK_UNDERRUN_ON;
4155 } else {
4156 mBuffer = sharedBuffer->pointer();
4157 }
4158 mBufferEnd = (uint8_t *)mBuffer + bufferSize;
4159 }
.......................................
}
The client at line 4134 was already covered in AudioFlinger::createTrack.
So getCblk ultimately returns that mCblkMemory. Just as with the BpAudioTrack earlier, what crosses Binder is a reference, here interface_cast<IMemory>(reply.readStrongBinder()): a BpMemory proxy, used to talk to the anonymous shared memory.
Phew, the set function is finally done.
2. Step two: the JNI native_start() calls AudioTrack::start(), which in turn calls mAudioTrack->start(); mAudioTrack is the remote TrackHandle proxy returned earlier, so over to AudioFlinger:
5771 status_t AudioFlinger::TrackHandle::start() {
5772 return mTrack->start();
5773 }
Sure enough, the handle does no work itself and hands off to the track, so into PlaybackThread::Track::start:
4573 PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
4574 playbackThread->addTrack_l(this);
which mainly does the two things shown above: get the current thread, then add the track to the active set. addTrack was analysed earlier; it wakes the playbackThread. At this point the playbackThread is running but no data has arrived yet, so onward.
3. Step three: the JNI native_write_byte(). The AudioTrack side keeps feeding data in, so AudioTrack's threadLoop finally has something to do:
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
processAudioBuffer mainly uses memcpy_to_i16_from_u8 to copy the data into shared memory; on the other side, AudioFlinger's playbackThread threadLoop starts receiving it. Into AudioFlinger::PlaybackThread::threadLoop():
First, checkForNewParameters_l checks whether the configuration changed, e.g. Bluetooth was just enabled;
next, checkSilentMode_l() checks whether silent mode is on, by reading property_get("ro.audio.silent", value, "0");
next, prepareTracks_l, which gets into audio signal processing; all I can say is the function is long and extremely complex, but that doesn't block our analysis;
next, threadLoop_mix(), which is in AudioMixer.cpp:
602 void AudioMixer::process(int64_t pts)
603 {
604 mState.hook(&mState, pts);
605 }
mState.hook is a callback (a function pointer); we won't analyse it in detail here (see the class constructor if you want to). In short, its possible values are:
process__validate
process__nop
process__genericNoResampling
process__genericResampling
process__OneTrack16BitsStereoNoResampling
process__TwoTracks16BitsStereoNoResampling
It is in these functions that the data written by AudioTrack is fetched and the audio processing happens; very complex stuff, honestly hard to follow.
Then threadLoop_write(), which in turn calls PlaybackThread::threadLoop_write():
bytesWritten = (int)mOutput->stream->write(mOutput->stream, mMixBuffer, mixBufferSize), and the data is finally written to the output. Reading this far, my heart has shattered too!!!
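Where does that write go? In a tinyalsa-based HAL like this one it should ultimately boil down to pcm_write; a minimal sketch of that last hop (the pcm_config values are illustrative, not the softwinner ones):

#include <string.h>
#include <tinyalsa/asoundlib.h>

/* Open PCM device 0 on card 0 for playback and push one buffer of
 * 16-bit stereo frames, roughly what a HAL's out_write ends up doing. */
static int write_pcm_sketch(const void *data, unsigned int bytes)
{
    struct pcm_config config;
    memset(&config, 0, sizeof(config));
    config.channels = 2;
    config.rate = 44100;
    config.period_size = 1024;
    config.period_count = 4;
    config.format = PCM_FORMAT_S16_LE;

    struct pcm *pcm = pcm_open(0, 0, PCM_OUT, &config);
    if (pcm == NULL || !pcm_is_ready(pcm))
        return -1;
    int ret = pcm_write(pcm, data, bytes);
    pcm_close(pcm);
    return ret;
}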
hal层的so库文件在device/softwinner/common/hardware/audio中编译生成,该路径下的audio_hw.c对上主要实现了android hal层so库的标准接口供audiofliger调用,对下主要通
过调用android标准的tinymix接口来控制底层驱动,从而实现音量控制,音频通路的切换等,tinymix驱动路径在external/tinyalsa中,它会编译生成tinyalsa可执行文件和
libtinyalsa.so库文件,其中库文件可以用来在终端命令行直接控制底层音频,而so库供提供库函数和audio_hw.c一起编译,从而实现通过audio_hw.c调用。
先从上层常用的接口讲起,这样便于理解,否则看完底层,其实也不知道到底怎么用。如应用层常用到的AudioSystem.setParameters("routing=8192");这表示设置当前音频通道的输出为那一路,看看它是如何从上层一路控制底层硬件输出的
通过aidl调用frameworks/base/media/java/android/media/AudioSystem.java的setParameters:
public static native int setParameters(String keyValuePairs);
这里又调用JNI的方法,在core/jni/android_media_AudioSystem.cpp 中:
79 static int
80 android_media_AudioSystem_setParameters(JNIEnv *env, jobject thiz, jstring keyValuePairs)
81 {
82 const jchar* c_keyValuePairs = env->GetStringCritical(keyValuePairs, 0);
83 String8 c_keyValuePairs8;
84 if (keyValuePairs) {
85 c_keyValuePairs8 = String8(c_keyValuePairs, env->GetStringLength(keyValuePairs));
86 env->ReleaseStringCritical(keyValuePairs, c_keyValuePairs);
87 }
88 int status = check_AudioSystem_Command(AudioSystem::setParameters(0, c_keyValuePairs8));
89 return status;
90 }
88行调用media/libmedia/AudioSystem.cpp方法:
167 status_t AudioSystem::setParameters(audio_io_handle_t ioHandle, const String8& keyValuePairs) {
168 const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
169 if (af == 0) return PERMISSION_DENIED;
170 return af->setParameters(ioHandle, keyValuePairs);
171 }
710行,调用了AudioFlinger.cpp方法:
747 if (ioHandle == 0) {
748 AutoMutex lock(mHardwareLock);
749 mHardwareStatus = AUDIO_SET_PARAMETER;
750 status_t final_result = NO_ERROR;
751 for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
752 audio_hw_device_t *dev = mAudioHwDevs[i];
753 result = dev->set_parameters(dev, keyValuePairs.string());
754 final_result = result ?: final_result;
755 }
753行,这里最终调用了hal层的set_parameters,所以进入device/softwinner/common/hardware/audio中:
adev->hw_device.set_parameters = adev_set_parameters;
----->
到这里,最后会通过str_parms_create_str将会把值放到哈系表中去,用str_parms_get_str可以将值取出来,供HAL层判断当前的输出设备为那一个
HAL层的音频库一般会编译成为audio.primary.default.so audio.primary.exDroid.so这两个库,其中exDroid为$(TARGET_BOARD_PLATFORM),即自己目标平台的名字,那我们的
android系统到底加载其中的那一个呢,这就要看hardware/libhardware/hardware.c中的hw_get_module_by_class函数了,这个函数会遍历一下数组,如果找不到,才会用default的:
45 static const char *variant_keys[] = {
46 "ro.hardware", /* This goes first so that it can pick up a different
47 file on the emulator. */
48 "ro.product.board",
49 "ro.board.platform",
50 "ro.arch"
51 };
我们看到ro.product.board的属性就是$(TARGET_BOARD_PLATFORM),所以加载的是自己平台的so库,即audio.primary.exDroid.so
再来看看audioflinger.cpp中一些常用的函数,播放声音时候首先创建播放线程,调用:
6753 audio_io_handle_t AudioFlinger::openOutput(audio_module_handle_t module,
6754 audio_devices_t *pDevices,
6755 uint32_t *pSamplingRate,
6756 audio_format_t *pFormat,
6757 audio_channel_mask_t *pChannelMask,
6758 uint32_t *pLatencyMs,
6759 audio_output_flags_t flags)
6760 {
....................................................................................
6785 outHwDev = findSuitableHwDev_l(module, *pDevices);
6786 if (outHwDev == NULL)
6787 return 0;
6788
6789 audio_io_handle_t id = nextUniqueId();
6790
6791 mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
6792
6793 status = outHwDev->open_output_stream(outHwDev,
6794 id,
6795 *pDevices,
6796 (audio_output_flags_t)flags,
6797 &config,
6798 &outStream);
6799
6800 mHardwareStatus = AUDIO_HW_IDLE;
..............................................................................................
6808 if (status == NO_ERROR && outStream != NULL) {
6809 AudioStreamOut *output = new AudioStreamOut(outHwDev, outStream);
6810
6811 if ((flags & AUDIO_OUTPUT_FLAG_DIRECT) ||
6812 (config.format != AUDIO_FORMAT_PCM_16_BIT) ||
6813 (config.channel_mask != AUDIO_CHANNEL_OUT_STEREO)) {
6814 thread = new DirectOutputThread(this, output, id, *pDevices);
6815 ALOGV("openOutput() created direct output: ID %d thread %p", id, thread);
6816 } else {
6817 thread = new MixerThread(this, output, id, *pDevices);
6818 ALOGV("openOutput() created mixer output: ID %d thread %p", id, thread);
6819 }
6820 mPlaybackThreads.add(id, thread);
这里主要是打开硬件设备,设置一些硬件的默认参数,入音量等,然后根据flags标记创建DirectOutputThread或者MixerThread,我们看他在AudioFlinger.h的定义:
class DirectOutputThread : public PlaybackThread {.................}
而PlaybackThread继承关系:
class PlaybackThread : public ThreadBase {...................}
可见他们都是PlaybackThread的子类,然后在6820行,将该thread添加到mPlaybackThreads中,mPlaybackThreads是一个vetor,它以id作为索引,将该线程保存起来,并返回给调用
者,后续播放声音时候通过传进该id(也就是audio_io_handle_t),从该vetor取就可以了。
什么时候开始运行这个线程呢,它是在创建线程时候就启动了,看如下函数就知道了:
1652 void AudioFlinger::PlaybackThread::onFirstRef()
1653 {
1654 run(mName, ANDROID_PRIORITY_URGENT_AUDIO);
1655 }
上面函数是播放时候调用,如果录音则流程一样相似,调用的是openInput:
6970 audio_io_handle_t AudioFlinger::openInput(audio_module_handle_t module,
6971 audio_devices_t *pDevices,
6972 uint32_t *pSamplingRate,
6973 audio_format_t *pFormat,
6974 uint32_t *pChannelMask)
6975 {
...................................................................
6995 inHwDev = findSuitableHwDev_l(module, *pDevices);
6996 if (inHwDev == NULL)
6997 return 0;
6998
6999 audio_io_handle_t id = nextUniqueId();
7000
7001 status = inHwDev->open_input_stream(inHwDev, id, *pDevices, &config,
7002 &inStream);
..................................................................................
7022 if (status == NO_ERROR && inStream != NULL) {
7023 AudioStreamIn *input = new AudioStreamIn(inHwDev, inStream);
7024
7025 // Start record thread
7026 // RecorThread require both input and output device indication to forward to audio
7027 // pre processing modules
7028 uint32_t device = (*pDevices) | primaryOutputDevice_l();
7029 thread = new RecordThread(this,
7030 input,
7031 reqSamplingRate,
7032 reqChannels,
7033 id,
7034 device);
7035 mRecordThreads.add(id, thread);
7036 ALOGV("openInput() created record thread: ID %d thread %p", id, thread);
7037 if (pSamplingRate != NULL) *pSamplingRate = reqSamplingRate;
7038 if (pFormat != NULL) *pFormat = config.format;
7039 if (pChannelMask != NULL) *pChannelMask = reqChannels;
7040
7041 input->stream->common.standby(&input->stream->common);
7042
7043 // notify client processes of the new input creation
7044 thread->audioConfigChanged_l(AudioSystem::INPUT_OPENED);
7045 return id;
7046 }
这里7029行的 RecordThread继承关系:
class RecordThread : public ThreadBase, public AudioBufferProvider
接着开始播放声音,调用的是createTrack:
438 sp<IAudioTrack> AudioFlinger::createTrack(
439 pid_t pid,
440 audio_stream_type_t streamType,
441 uint32_t sampleRate,
442 audio_format_t format,
443 uint32_t channelMask,
444 int frameCount,
445 IAudioFlinger::track_flags_t flags,
446 const sp<IMemory>& sharedBuffer,
447 audio_io_handle_t output,
448 pid_t tid,
449 int *sessionId,
450 status_t *status)
451 {
466 {
467 Mutex::Autolock _l(mLock);
468 PlaybackThread *thread = checkPlaybackThread_l(output);
469 PlaybackThread *effectThread = NULL;
470 if (thread == NULL) {
471 ALOGE("unknown output thread");
472 lStatus = BAD_VALUE;
473 goto Exit;
474 }
475
476 client = registerPid_l(pid);
502 track = thread->createTrack_l(client, streamType, sampleRate, format,
503 channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, &lStatus);
504
505 // move effect chain to this output thread if an effect on same session was waiting
506 // for a track to be created
507 if (lStatus == NO_ERROR && effectThread != NULL) {
508 Mutex::Autolock _dl(thread->mLock);
509 Mutex::Autolock _sl(effectThread->mLock);
510 moveEffectChain_l(lSessionId, effectThread, thread, true);
511 }
512
513 // Look for sync events awaiting for a session to be used.
514 for (int i = 0; i < (int)mPendingSyncEvents.size(); i++) {
515 if (mPendingSyncEvents[i]->triggerSession() == lSessionId) {
516 if (thread->isValidSyncEvent(mPendingSyncEvents[i])) {
517 if (lStatus == NO_ERROR) {
518 track->setSyncEvent(mPendingSyncEvents[i]);
519 } else {
520 mPendingSyncEvents[i]->cancel();
521 }
522 mPendingSyncEvents.removeAt(i);
523 i--;
524 }
525 }
526 }
528 if (lStatus == NO_ERROR) {
529 trackHandle = new TrackHandle(track);
530 } else {
531 // remove local strong reference to Client before deleting the Track so that the Client
532 // destructor is called by the TrackBase destructor with mLock held
533 client.clear();
534 track.clear();
535 }
536
537 Exit:
538 if (status != NULL) {
539 *status = lStatus;
540 }
541 return trackHandle;
476行的函数:
422 sp<AudioFlinger::Client> AudioFlinger::registerPid_l(pid_t pid)
423 {
424 // If pid is already in the mClients wp<> map, then use that entry
425 // (for which promote() is always != 0), otherwise create a new entry and Client.
426 sp<Client> client = mClients.valueFor(pid).promote();
427 if (client == 0) {
428 client = new Client(this, pid);
429 mClients.add(pid, client);
430 }
431
432 return client;
433 }
我们第一次进来,client为null,所以进入428行:
5685 AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
5686 : RefBase(),
5687 mAudioFlinger(audioFlinger),
5688 // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
5689 mMemoryDealer(new MemoryDealer(1024*1024, "AudioFlinger::Client")),
5690 mPid(pid),
5691 mTimedTrackCount(0)
5692 {
5693 // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
5694 }
申请了一块内存啊,接着往下。
这里型参中传进来的output参数就是前面加入vetor的id,通过468行checkPlaybackThread_l函数将前面的thread取出来,接着502行,创建PlaybackThread::Track类,从中可以看到
一个线程可以有多个track,对应着不同的音频,比如,在统一个进程中,我们可以边播电影边听音乐,同时有两个track输出。进入看看该函数:
1658 sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
1659 const sp<AudioFlinger::Client>& client,
1660 audio_stream_type_t streamType,
1661 uint32_t sampleRate,
1662 audio_format_t format,
1663 uint32_t channelMask,
1664 int frameCount,
1665 const sp<IMemory>& sharedBuffer,
1666 int sessionId,
1667 IAudioFlinger::track_flags_t flags,
1668 pid_t tid,
1669 status_t *status)
...................................................................................
1759 lStatus = initCheck();
1760 if (lStatus != NO_ERROR) {
1761 ALOGE("Audio driver not initialized.");
1762 goto Exit;
1763 }
1764
1765 { // scope for mLock
1766 Mutex::Autolock _l(mLock);
1767
1768 // all tracks in same audio session must share the same routing strategy otherwise
1769 // conflicts will happen when tracks are moved from one output to another by audio policy
1770 // manager
1771 uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
1772 for (size_t i = 0; i < mTracks.size(); ++i) {
1773 sp<Track> t = mTracks[i];
1774 if (t != 0 && !t->isOutputTrack()) {
1775 uint32_t actual = AudioSystem::getStrategyForStream(t->streamType());
1776 if (sessionId == t->sessionId() && strategy != actual) {
1777 ALOGE("createTrack_l() mismatched strategy; expected %u but found %u",
1778 strategy, actual);
1779 lStatus = BAD_VALUE;
1780 goto Exit;
1781 }
1782 }
1783 }
1784
1785 if (!isTimed) {
1786 track = new Track(this, client, streamType, sampleRate, format,
1787 channelMask, frameCount, sharedBuffer, sessionId, flags);
1788 } else {
1789 track = TimedTrack::create(this, client, streamType, sampleRate, format,
1790 channelMask, frameCount, sharedBuffer, sessionId);
1791 }
1796 mTracks.add(track);
1789行创建track实例,1796行将实例加入mTracks,mTracks也是一个vetor。
然后回到createTrack中,529行创建刚new出来的track的handle,并返回给调用者,看看这里的TrackHandle继承关系:
class TrackHandle : public android::BnAudioTrack { ............. }
可以看到TrackHandle继承自BnAudioTrack,createTrack的调用者可以通过IAudioTrack接口与AudioFlinger中对应的Track实例交互,这里又不得不说一下binder间的通讯机制了,我
们在audiotrack中调用audioflinger,本来就是跨进程调用的,这里服务端返回给audiotrack进程的是trackHandle,那是不是这就接把这个指针地址复制回给audiotrack就行了呢?
当然不行,因为这个地址在audiotrack的进程是无效的,两个进程的地址是独立的,那么这就要在binder驱动中做转换了,在binder驱动中会返回一个引用给audiotrack,我们看
IAudioFlinger.cpp中的继承关系:
class BnAudioTrack : public BnInterface<IAudioTrack>
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
class BBinder : public IBinder
class IBinder : public virtual RefBase
class IAudioTrack : public IInterface
class IInterface : public virtual RefBase
所以有如下关系:
BnAudioTrack -- BnInterface -- IAudioTrack -- IInterface -- RefBase
|
---- BBinder -- IBinder -- RefBase
IAudioFlinger.cpp的BpAudioFlinger实现了客户端的函数接口:
virtual sp<IAudioTrack> createTrack(
...................
status_t lStatus = remote()->transact(CREATE_TRACK, data, &reply);
125 track = interface_cast<IAudioTrack>(reply.readStrongBinder());
126 }
127 if (status) {
128 *status = lStatus;
129 }
130 return track;
131 }
而进程另一端的IAudioFlinger.cpp的BnAudioFlinger::onTransact中:
709 sp<IAudioTrack> track = createTrack(pid,
710 (audio_stream_type_t) streamType, sampleRate, format,
711 channelCount, bufferCount, flags, buffer, output, tid, &sessionId, &status);
712 reply->writeInt32(sessionId);
713 reply->writeInt32(status);
714 reply->writeStrongBinder(track->asBinder());
715 return NO_ERROR;
这里的remote()->transact跨进程调用了service端的onTransact的createTrack,这里的createTrack函数实现就是AudioFlinger.cpp的createTrack。
track->asBinder是什么意思呢?看看asBinder的定义:
IInterface.cpp:
30 sp<IBinder> IInterface::asBinder()
31 {
32 return this ? onAsBinder() : NULL;
33 }
IInterface.h:
141 inline IBinder* BpInterface<INTERFACE>::onAsBinder()
142 {
143 return remote();
144 }
和
128 template<typename INTERFACE>
129 IBinder* BnInterface<INTERFACE>::onAsBinder()
130 {
131 return this;
132 }
Binder.h:
inline IBinder* remote() { return mRemote; }
IBinder* const mRemote;
我们这里是BnInterface,所以返回的是return this;返回的还是track对象(直接写writeStrongBinder(track)不就完了吗?我觉得是可以的,因为其他地方都是这样写的)。
接着audiotrack端interface_cast<IAudioTrack>模板展开后就是new BpAudioTrack(reply.readStrongBinder()):
Parcel.cpp:
1040 sp<IBinder> Parcel::readStrongBinder() const
1041 {
1042 sp<IBinder> val;
1043 unflatten_binder(ProcessState::self(), *this, &val);
1044 return val;
1045 }
函数unflatten_binder:
236 status_t unflatten_binder(const sp<ProcessState>& proc,
237 const Parcel& in, sp<IBinder>* out)
238 {
239 const flat_binder_object* flat = in.readObject(false);
240
241 if (flat) {
242 switch (flat->type) {
243 case BINDER_TYPE_BINDER:
244 *out = static_cast<IBinder*>(flat->cookie);
245 return finish_unflatten_binder(NULL, *flat, in);
246 case BINDER_TYPE_HANDLE:
247 *out = proc->getStrongProxyForHandle(flat->handle);
248 return finish_unflatten_binder(
249 static_cast<BpBinder*>(out->get()), *flat, in);
250 }
251 }
252 return BAD_TYPE;
253 }
所以变成ProcessState::self()->getStrongProxyForHandle,这里是全局的静态变量:
74 sp<ProcessState> ProcessState::self()
75 {
76 Mutex::Autolock _l(gProcessMutex);
77 if (gProcess != NULL) {
78 return gProcess;
79 }
80 gProcess = new ProcessState;
81 return gProcess;
82 }
This is a singleton. Where was it opened? Earlier, when the AudioTrack process first obtained the AudioFlinger service, which shows each process has exactly one binder memory mapping, so here it is simply returned. With the mapping in place, with new BpAudioTrack(), and with reply.readStrongBinder() giving a local reference to the remote BnAudioTrack, we can now talk to the remote side. Note also that this binder never registers itself with servicemanager, so it is an anonymous binder.
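A side note on interface_cast: it is a thin template in IInterface.h that forwards to IAudioTrack::asInterface, and asInterface is generated by the IMPLEMENT_META_INTERFACE macro. A sketch of the generic expansion (not the literal generated code):
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

// Roughly what IMPLEMENT_META_INTERFACE(AudioTrack, ...) generates:
sp<IAudioTrack> IAudioTrack::asInterface(const sp<IBinder>& obj)
{
    sp<IAudioTrack> intr;
    if (obj != NULL) {
        // same-process case: the binder is a local BnAudioTrack
        intr = static_cast<IAudioTrack*>(
                obj->queryLocalInterface(IAudioTrack::descriptor).get());
        if (intr == NULL) {
            // cross-process case: wrap the handle in a proxy
            intr = new BpAudioTrack(obj);
        }
    }
    return intr;
}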
To make the relationship between AudioFlinger and AudioTrack above easier to grasp, here is a quote from an expert online:
"TrackHandle is a Binder-based Track that the AT side obtains by calling AF's CreateTrack.
This TrackHandle is really just a cross-process-capable wrapper around the PlaybackThread::Track that does the actual work.
What does that mean? PlaybackThread::Track is the object that really works inside AF; to support cross-process access, we wrap it in a TrackHandle. So when AudioTrack invokes TrackHandle's methods, TrackHandle actually calls through to PlaybackThread::Track. You can think of it as a kind of Proxy pattern.
This is one reason AudioFlinger is so extraordinarily complex!!!"
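That Proxy relationship is plain in code form; a condensed sketch (the real TrackHandle is declared inside AudioFlinger.h and forwards every IAudioTrack method the same way):
class TrackHandle : public BnAudioTrack {
public:
    TrackHandle(const sp<PlaybackThread::Track>& track) : mTrack(track) {}
    virtual status_t start() { return mTrack->start(); }  // pure delegation
    virtual void     stop()  { mTrack->stop(); }          // pure delegation
private:
    const sp<PlaybackThread::Track> mTrack;  // the object that really works
};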
Track provides the caller with plenty of methods for controlling audio:
4373 /*static*/ void AudioFlinger::PlaybackThread::Track::appendDumpHeader(String8& result)
4379 void AudioFlinger::PlaybackThread::Track::dump(char* buffer, size_t size)
4464 status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
AudioBufferProvider::Buffer* buffer, int64_t pts)
4522 size_t AudioFlinger::PlaybackThread::Track::framesReady()
4527 bool AudioFlinger::PlaybackThread::Track::isReady()
4539 status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event,
int triggerSession)
4585 void AudioFlinger::PlaybackThread::Track::stop()
4620 void AudioFlinger::PlaybackThread::Track::pause()
4643 void AudioFlinger::PlaybackThread::Track::flush()
4667 void AudioFlinger::PlaybackThread::Track::reset()
4685 void AudioFlinger::PlaybackThread::Track::mute(bool muted)
4690 status_t AudioFlinger::PlaybackThread::Track::attachAuxEffect(int EffectId)
4739 void AudioFlinger::PlaybackThread::Track::setAuxBuffer(int EffectId, int32_t *buffer)
4745 bool AudioFlinger::PlaybackThread::Track::presentationComplete(size_t framesWritten,
4765 void AudioFlinger::PlaybackThread::Track::triggerEvents(AudioSystem::sync_event_t type)
4778 uint32_t AudioFlinger::PlaybackThread::Track::getVolumeLR()
4803 status_t AudioFlinger::PlaybackThread::Track::setSyncEvent(const sp<SyncEvent>& event)
and so on.
The recording flow is similar to the above; the function invoked is:
5836 sp<IAudioRecord> AudioFlinger::openRecord(
5837 pid_t pid,
5838 audio_io_handle_t input,
5839 uint32_t sampleRate,
5840 audio_format_t format,
5841 uint32_t channelMask,
5842 int frameCount,
5843 IAudioFlinger::track_flags_t flags,
5844 int *sessionId,
5845 status_t *status)
5846 {
5864 thread = checkRecordThread_l(input);
5865 if (thread == NULL) {
5866 lStatus = BAD_VALUE;
5867 goto Exit;
5868 }
5869
5870 client = registerPid_l(pid);
5881 // create new record track. The record track uses one track in mHardwareMixerThread by convention.
5882 recordTrack = thread->createRecordTrack_l(client,
5883 sampleRate,
5884 format,
5885 channelMask,
5886 frameCount,
5887 lSessionId,
5888 &lStatus);
5889 }
5890 if (lStatus != NO_ERROR) {
5891 // remove local strong reference to Client before deleting the RecordTrack so that the Client
5892 // destructor is called by the TrackBase destructor with mLock held
5893 client.clear();
5894 recordTrack.clear();
5895 goto Exit;
5896 }
5897
5898 // return to handle to client
5899 recordHandle = new RecordHandle(recordTrack);
5900 lStatus = NO_ERROR;
5901
5902 Exit:
5903 if (status) {
5904 *status = lStatus;
5905 }
5906 return recordHandle;
5907 }
Likewise, RecordTrack provides methods for its callers:
5367 status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer, int64_t pts)
5406 status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
int triggerSession)
5418 void AudioFlinger::RecordThread::RecordTrack::stop()
5431 void AudioFlinger::RecordThread::RecordTrack::dump(char* buffer, size_t size)
AudioFlinger has one more function worth noting, openDuplicateOutput. It is invoked when two hardware outputs are open at once, for example the speaker and Bluetooth together:
6879 audio_io_handle_t AudioFlinger::openDuplicateOutput(audio_io_handle_t output1,
6880 audio_io_handle_t output2)
6881 {
6882 Mutex::Autolock _l(mLock);
6883 MixerThread *thread1 = checkMixerThread_l(output1);
6884 MixerThread *thread2 = checkMixerThread_l(output2);
6885
6886 if (thread1 == NULL || thread2 == NULL) {
6887 ALOGW("openDuplicateOutput() wrong output mixer type for output %d or %d", output1, output2);
6888 return 0;
6889 }
6890
6891 audio_io_handle_t id = nextUniqueId();
6892 DuplicatingThread *thread = new DuplicatingThread(this, thread1, id);
6893 thread->addOutputTrack(thread2);
6894 mPlaybackThreads.add(id, thread);
6895 // notify client processes of the new output creation
6896 thread->audioConfigChanged_l(AudioSystem::OUTPUT_OPENED);
6897 return id;
6898 }
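Conceptually the DuplicatingThread just fans each mixed buffer out to both underlying outputs. A hypothetical sketch of the idea only (the real class feeds thread2 through an OutputTrack rather than writing to the HAL directly):
#include <vector>
#include <hardware/audio.h>

struct DuplicatingSink {
    std::vector<audio_stream_out_t*> outputs;   // e.g. speaker + A2DP
    void write(const void* buf, size_t bytes) {
        for (size_t i = 0; i < outputs.size(); i++)
            outputs[i]->write(outputs[i], buf, bytes);  // each sink gets a copy
    }
};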
Next, let's look at frameworks/av/media/libmedia/ToneGenerator.cpp to see how an AudioTrack is used and how it deals with AudioFlinger, starting from the init function:
1011 bool ToneGenerator::initAudioTrack() {
1012
1013 if (mpAudioTrack) {
1014 delete mpAudioTrack;
1015 mpAudioTrack = NULL;
1016 }
1017
1018 // Open audio track in mono, PCM 16bit, default sampling rate, default buffer size
1019 mpAudioTrack = new AudioTrack();
1020 ALOGV("Create Track: %p", mpAudioTrack);
1021
1022 mpAudioTrack->set(mStreamType,
1023 0, // sampleRate
1024 AUDIO_FORMAT_PCM_16_BIT,
1025 AUDIO_CHANNEL_OUT_MONO,
1026 0, // frameCount
1027 AUDIO_OUTPUT_FLAG_FAST,
1028 audioCallback,
1029 this, // user
1030 0, // notificationFrames
1031 0, // sharedBuffer
1032 mThreadCanCallJava);
1033
1034 if (mpAudioTrack->initCheck() != NO_ERROR) {
1035 ALOGE("AudioTrack->initCheck failed");
1036 goto initAudioTrack_exit;
1037 }
1038
1039 mpAudioTrack->setVolume(mVolume, mVolume);
1040
1041 mState = TONE_INIT;
1042
1043 return true;
Line 1019 news an AudioTrack(); the code lives in frameworks/av/media/libmedia/AudioTrack.cpp:
89 AudioTrack::AudioTrack()
90 : mStatus(NO_INIT),
91 mIsTimed(false),
92 mPreviousPriority(ANDROID_PRIORITY_NORMAL),
93 mPreviousSchedulingGroup(SP_DEFAULT)
94 {
95 }
The constructor does essentially nothing, so keep going to line 1022, mpAudioTrack->set:
180 status_t AudioTrack::set(
181 audio_stream_type_t streamType,
182 uint32_t sampleRate,
183 audio_format_t format,
184 int channelMask,
185 int frameCount,
186 audio_output_flags_t flags,
187 callback_t cbf,
188 void* user,
189 int notificationFrames,
190 const sp<IMemory>& sharedBuffer,
191 bool threadCanCallJava,
192 int sessionId)
193 {
255 audio_io_handle_t output = AudioSystem::getOutput(
256 streamType,
257 sampleRate, format, channelMask,
258 flags);
259
260 if (output == 0) {
261 ALOGE("Could not get audio output for stream type %d", streamType);
262 return BAD_VALUE;
263 }
275 if (cbf != NULL) {
276 mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
277 mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
278 }
279
280 // create the IAudioTrack
281 status_t status = createTrack_l(streamType,
282 sampleRate,
283 format,
284 (uint32_t)channelMask,
285 frameCount,
286 flags,
287 sharedBuffer,
288 output);
289
290 if (status != NO_ERROR) {
291 if (mAudioTrackThread != 0) {
292 mAudioTrackThread->requestExit();
293 mAudioTrackThread.clear();
294 }
295 return status;
296 }
getOutput at line 255 goes through the audio policy under hardware/libhardware_legacy/audio to decide which output device to use, and ultimately calls AudioFlinger::openOutput (seen above in AudioFlinger.cpp) to open the output device and create a MixerThread (or DirectOutputThread). It returns an audio_io_handle_t into output; as said before, this is the thread's unique id, and later the matching thread can be looked up from the Vector by this id. On to line 276, new AudioTrackThread; its inheritance is as follows:
459 /* a small internal class to handle the callback */
460 class AudioTrackThread : public Thread
461 {
462 public:
463 AudioTrackThread(AudioTrack& receiver, bool bCanCallJava = false);
464
465 // Do not call Thread::requestExitAndWait() without first calling requestExit().
466 // Thread::requestExitAndWait() is not virtual, and the implementation doesn't do enough.
467 virtual void requestExit();
468
469 void pause(); // suspend thread from execution at next loop boundary
470 void resume(); // allow thread to execute, if not requested to exit
471
472 private:
Clearly a thread class: once run() is called, the child thread executes threadLoop. Look at the constructor:
1450 AudioTrack::AudioTrackThread::AudioTrackThread(AudioTrack& receiver, bool bCanCallJava)
1451 : Thread(bCanCallJava), mReceiver(receiver), mPaused(true)
1452 {
1453 }
It stores the current AudioTrack into mReceiver; then run() starts the thread executing (whereas an ordinary Java thread starts via its start() method):
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
As you can see, the key part is processAudioBuffer at line 1469 (we will come back to analyze it later).
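It is worth spelling out the android::Thread contract (utils/Thread.h) this relies on: run() spawns the thread, which then calls threadLoop() repeatedly for as long as it returns true and no exit has been requested; returning false ends the thread. A minimal sketch, with doOneUnitOfWork() as a made-up placeholder:
#include <utils/threads.h>

class WorkerThread : public android::Thread {
    void doOneUnitOfWork() { /* hypothetical work item */ }
    virtual bool threadLoop() {
        doOneUnitOfWork();
        return true;   // true: call threadLoop() again; false: thread exits
    }
};

// sp<WorkerThread> t = new WorkerThread();
// t->run("Worker", ANDROID_PRIORITY_AUDIO);
// ...later: t->requestExit();   // makes exitPending() return true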
With thread startup covered, back to AudioTrack.cpp's set function and on to createTrack_l at line 281:
743 status_t AudioTrack::createTrack_l(
744 audio_stream_type_t streamType,
745 uint32_t sampleRate,
746 audio_format_t format,
747 uint32_t channelMask,
748 int frameCount,
749 audio_output_flags_t flags,
750 const sp<IMemory>& sharedBuffer,
751 audio_io_handle_t output)
752 {
................ the function is long; its first part mainly computes the sample rate, buffer size and so on ................
873 sp<IAudioTrack> track = audioFlinger->createTrack(getpid(),
874 streamType,
875 sampleRate,
876 format,
877 channelMask,
878 frameCount,
879 trackFlags,
880 sharedBuffer,
881 output,
882 tid,
883 &mSessionId,
884 &status);
885
886 if (track == 0) {
887 ALOGE("AudioFlinger could not create track, status: %d", status);
888 return status;
889 }
890 sp<IMemory> cblk = track->getCblk();
891 if (cblk == 0) {
892 ALOGE("Could not get control block");
893 return NO_INIT;
894 }
895 mAudioTrack = track;
896 mCblkMemory = cblk;
897 mCblk = static_cast<audio_track_cblk_t*>(cblk->pointer());
898 // old has the previous value of mCblk->flags before the "or" operation
899 int32_t old = android_atomic_or(CBLK_DIRECTION_OUT, &mCblk->flags);
..................................................
Line 873 calls audioFlinger->createTrack, which was analyzed above: it mainly creates the track and returns the handle that controls it.
Next, getCblk at line 890:
5767 sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
5768 return mTrack->getCblk();
5769 }
For mTrack here, look at AudioFlinger's TrackHandle constructor:
5753 AudioFlinger::TrackHandle::TrackHandle(const sp<AudioFlinger::PlaybackThread::Track>& track)
5754 : BnAudioTrack(),
5755 mTrack(track)
5756 {
5757 }
So mTrack was assigned the track, meaning this calls the track's getCblk; and since this track belongs to PlaybackThread, look next at Track's constructor:
4273 AudioFlinger::PlaybackThread::Track::Track(
4274 PlaybackThread *thread,
4275 const sp<Client>& client,
4276 audio_stream_type_t streamType,
4277 uint32_t sampleRate,
4278 audio_format_t format,
4279 uint32_t channelMask,
4280 int frameCount,
4281 const sp<IMemory>& sharedBuffer,
4282 int sessionId,
4283 IAudioFlinger::track_flags_t flags)
4284 : TrackBase(thread, client, sampleRate, format, channelMask, frameCount, sharedBuffer, sessionId),
Track inherits from TrackBase, so look at TrackBase's class definition:
class TrackBase : public ExtendedAudioBufferProvider, public RefBase {
..............................
sp<IMemory> getCblk() const { return mCblkMemory; }
audio_track_cblk_t* cblk() const { return mCblk; }
..............................
sp<IMemory> mCblkMemory;
.............................
}
There is the getCblk function: it simply returns the IMemory, which refers to a block of anonymous shared memory. Look at where this memory gets initialized:
4165 } else {
4166 mCblk = (audio_track_cblk_t *)(new uint8_t[size]);
4167 // construct the shared structure in-place.
4168 new(mCblk) audio_track_cblk_t();
4169 // clear all buffers
4170 mCblk->frameCount = frameCount;
4171 mCblk->sampleRate = sampleRate;
4172 // uncomment the following lines to quickly test 32-bit wraparound
4173 // mCblk->user = 0xffff0000;
4174 // mCblk->server = 0xffff0000;
4175 // mCblk->userBase = 0xffff0000;
4176 // mCblk->serverBase = 0xffff0000;
4177 mChannelCount = channelCount;
4178 mChannelMask = channelMask;
4179 mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
4180 memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
4181 // Force underrun condition to avoid false underrun callback until first data is
4182 // written to buffer (other flags are cleared)
4183 mCblk->flags = CBLK_UNDERRUN_ON;
4184 mBufferEnd = (uint8_t *)mBuffer + bufferSize;
4185 }
4186 }
Look at the new on line 4168: this is C++'s placement new. What is it for? Inside the parentheses after new is a buffer, and after that comes a class constructor. Exactly: placement new constructs an object inside that given buffer. Ordinary new cannot create an object at a chosen address, but placement new can, which is precisely what is needed here: take a block of shared memory and construct the object on top of it, so the object is visible from both mappings. Brilliant. How did they ever think of it?
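A tiny standalone demonstration of the syntax (plain C++, nothing Android-specific):
#include <new>       // declares the placement form of operator new
#include <cstdint>

struct Header { int frameCount; int sampleRate; };

int main() {
    // pretend this buffer came from a shared-memory allocator
    uint8_t* buf = new uint8_t[4096];
    // construct a Header *inside* buf rather than on the heap
    Header* h = new (buf) Header();
    h->frameCount = 1024;
    h->sampleRate = 44100;
    h->~Header();    // placement new requires explicit destruction
    delete[] buf;
    return 0;
}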
In effect, AudioFlinger creates a block of shared memory for AudioTrack; just think of it as a FIFO (audio_track_cblk_t): AudioTrack pushes data in, AudioFlinger pulls data out, runs it through the Mixer, and sends it on to the device.
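A heavily simplified sketch of that FIFO idea; the real audio_track_cblk_t adds locking, base offsets and underrun flags, but the core is two monotonically increasing counters whose unsigned wraparound arithmetic keeps working (hence the commented 0xffff0000 test values above):
#include <stdint.h>

// Single-writer/single-reader FIFO bookkeeping over one shared buffer.
// 'user' is the producer index (AudioTrack side), 'server' the consumer
// index (AudioFlinger side), mirroring the cblk field names.
struct SimpleCblk {
    volatile uint32_t user;     // total frames written
    volatile uint32_t server;   // total frames consumed
    uint32_t frameCount;        // capacity of the data area
};

static uint32_t framesReadyToRead(const SimpleCblk* c) {
    return c->user - c->server;                    // filled frames
}

static uint32_t framesAvailableToWrite(const SimpleCblk* c) {
    return c->frameCount - (c->user - c->server);  // free frames
}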
Back in createTrack_l: with the IMemory pointer in hand, it goes on to set the volume and a few other parameters.
That wraps up ToneGenerator::initAudioTrack in ToneGenerator.cpp; next, ToneGenerator::startTone actually plays the tone:
881 bool ToneGenerator::startTone(tone_type toneType, int durationMs) {
882 bool lResult = false;
883 status_t lStatus;
884
885 if ((toneType < 0) || (toneType >= NUM_TONES))
886 return lResult;
887
888 if (mState == TONE_IDLE) {
889 ALOGV("startTone: try to re-init AudioTrack");
890 if (!initAudioTrack()) {
891 return lResult;
892 }
893 }
..........................................................
916 if (mState == TONE_INIT) {
917 if (prepareWave()) {
918 ALOGV("Immediate start, time %d", (unsigned int)(systemTime()/1000000));
919 lResult = true;
920 mState = TONE_STARTING;
921 mLock.unlock();
922 mpAudioTrack->start();
923 mLock.lock();
924 if (mState == TONE_STARTING) {
925 ALOGV("Wait for start callback");
926 lStatus = mWaitCbkCond.waitRelative(mLock, seconds(3));
927 if (lStatus != NO_ERROR) {
928 ALOGE("--- Immediate start timed out, status %d", lStatus);
929 mState = TONE_IDLE;
930 lResult = false;
931 }
932 }
933 } else {
934 mState = TONE_IDLE;
935 }
936 } else {
937 ALOGV("Delayed start");
938 mState = TONE_RESTARTING;
939 lStatus = mWaitCbkCond.waitRelative(mLock, seconds(3));
940 if (lStatus == NO_ERROR) {
941 if (mState != TONE_IDLE) {
942 lResult = true;
943 }
944 ALOGV("cond received");
Line 917 prepares the waveform, adjusting the output frequency to produce the various tones; line 922, mpAudioTrack->start(), starts playback, calling straight into AudioTrack's start method:
367 void AudioTrack::start()
368 {
369 sp<AudioTrackThread> t = mAudioTrackThread;
370 status_t status = NO_ERROR;
371
372 ALOGV("start %p", this);
373
374 AutoMutex lock(mLock);
375 // acquire a strong reference on the IMemory and IAudioTrack so that they cannot be destroyed
376 // while we are accessing the cblk
377 sp<IAudioTrack> audioTrack = mAudioTrack;
378 sp<IMemory> iMem = mCblkMemory;
379 audio_track_cblk_t* cblk = mCblk;
380
381 if (!mActive) {
382 mFlushed = false;
383 mActive = true;
384 mNewPosition = cblk->server + mUpdatePeriod;
385 cblk->lock.lock();
386 cblk->bufferTimeoutMs = MAX_STARTUP_TIMEOUT_MS;
387 cblk->waitTimeMs = 0;
388 android_atomic_and(~CBLK_DISABLED_ON, &cblk->flags);
389 if (t != 0) {
390 t->resume();
391 } else {
392 mPreviousPriority = getpriority(PRIO_PROCESS, 0);
393 get_sched_policy(0, &mPreviousSchedulingGroup);
394 androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
395 }
396
397 ALOGV("start %p before lock cblk %p", this, mCblk);
398 if (!(cblk->flags & CBLK_INVALID_MSK)) {
399 cblk->lock.unlock();
400 ALOGV("mAudioTrack->start()");
401 status = mAudioTrack->start();
402 cblk->lock.lock();
403 if (status == DEAD_OBJECT) {
404 android_atomic_or(CBLK_INVALID_ON, &cblk->flags);
405 }
406 }
407 if (cblk->flags & CBLK_INVALID_MSK) {
408 status = restoreTrack_l(cblk, true);
409 }
410 cblk->lock.unlock();
411 if (status != NO_ERROR) {
412 ALOGV("start() failed");
413 mActive = false;
414 if (t != 0) {
415 t->pause();
416 } else {
417 setpriority(PRIO_PROCESS, 0, mPreviousPriority);
418 set_sched_policy(0, mPreviousSchedulingGroup);
419 }
420 }
421 }
422
423 }
At line 401, mAudioTrack->start(): mAudioTrack is the track returned by audioFlinger->createTrack, i.e. the handle that controls the track, so this lands in
AudioFlinger::PlaybackThread::Track::start:
4539 status_t AudioFlinger::PlaybackThread::Track::start(AudioSystem::sync_event_t event,
4540 int triggerSession)
4541 {
4542 status_t status = NO_ERROR;
4543 ALOGV("start(%d), calling pid %d session %d",
4544 mName, IPCThreadState::self()->getCallingPid(), mSessionId);
4545
4546 sp<ThreadBase> thread = mThread.promote();
4547 if (thread != 0) {
4548 Mutex::Autolock _l(thread->mLock);
4549 track_state state = mState;
4550 // here the track could be either new, or restarted
4551 // in both cases "unstop" the track
4552 if (mState == PAUSED) {
4553 mState = TrackBase::RESUMING;
4554 ALOGV("PAUSED => RESUMING (%d) on thread %p", mName, this);
4555 } else {
4556 mState = TrackBase::ACTIVE;
4557 ALOGV("? => ACTIVE (%d) on thread %p", mName, this);
4558 }
4559
4560 if (!isOutputTrack() && state != ACTIVE && state != RESUMING) {
4561 thread->mLock.unlock();
4562 status = AudioSystem::startOutput(thread->id(), mStreamType, mSessionId);
4563 thread->mLock.lock();
4564
4565 #ifdef ADD_BATTERY_DATA
4566 // to track the speaker usage
4567 if (status == NO_ERROR) {
4568 addBatteryData(IMediaPlayerService::kBatteryDataAudioFlingerStart);
4569 }
4570 #endif
4571 }
4572 if (status == NO_ERROR) {
4573 PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
4574 playbackThread->addTrack_l(this);
4575 } else {
4576 mState = state;
4577 triggerEvents(AudioSystem::SYNC_EVENT_PRESENTATION_COMPLETE);
4578 }
4579 } else {
4580 status = BAD_VALUE;
4581 }
4582 return status;
4583 }
Line 4573 retrieves the previously saved playbackThread. But wasn't the track already stored in the mTracks Vector? Why does line 4574 call addTrack_l to add it somewhere else as well? Let's look inside:
1888 status_t AudioFlinger::PlaybackThread::addTrack_l(const sp<Track>& track)
1889 {
1890 status_t status = ALREADY_EXISTS;
1891
1892 // set retry count for buffer fill
1893 track->mRetryCount = kMaxTrackStartupRetries;
1894 if (mActiveTracks.indexOf(track) < 0) {
1895 // the track is newly added, make sure it fills up all its
1896 // buffers before playing. This is to ensure the client will
1897 // effectively get the latency it requested.
1898 track->mFillingUpStatus = Track::FS_FILLING;
1899 track->mResetDone = false;
1900 track->mPresentationCompleteFrames = 0;
1901 mActiveTracks.add(track);
1902 if (track->mainBuffer() != mMixBuffer) {
1903 sp<EffectChain> chain = getEffectChain_l(track->sessionId());
1904 if (chain != 0) {
1905 ALOGV("addTrack_l() starting track on chain %p for session %d", chain.get(), track->sessionId());
1906 chain->incActiveTrackCnt();
1907 }
1908 }
1909
1910 status = NO_ERROR;
1911 }
1912
1913 ALOGV("mWaitWorkCV.broadcast");
1914 mWaitWorkCV.broadcast();
1915
1916 return status;
1917 }
Line 1894 first checks whether mActiveTracks already contains the track; if not, line 1901 adds it. As the name suggests, this is the array of active tracks, the ones that actually have work to do.
Line 1914, mWaitWorkCV.broadcast, is a broadcast. Who gets notified? The MixerThread (or DirectOutputThread) is told to start working:
2505 bool AudioFlinger::PlaybackThread::threadLoop()
2506 {
2507 Vector< sp<Track> > tracksToRemove;
2508
2509 standbyTime = systemTime();
2533 while (!exitPending())
2534 {
2535 cpuStats.sample(myName);
2536
2537 Vector< sp<EffectChain> > effectChains;
2538
2539 processConfigEvents();
2540
2541 { // scope for mLock
2542
2543 Mutex::Autolock _l(mLock);
2544
2545 if (checkForNewParameters_l()) {
2546 cacheParameters_l();
2547 }
2548
2549 saveOutputTracks();
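Before moving on, the wakeup handshake between addTrack_l and this threadLoop is worth a sketch. A hedged simplification using the same utils primitives (the real loop also handles standby, config events and effects):
#include <utils/threads.h>

android::Mutex     gLock;
android::Condition gWaitWorkCV;
bool               gHaveWork = false;

void addTrackSketch() {                    // producer side (addTrack_l)
    android::Mutex::Autolock _l(gLock);
    gHaveWork = true;
    gWaitWorkCV.broadcast();               // wake the mixer thread
}

void mixerWaitSketch() {                   // consumer side (threadLoop)
    android::Mutex::Autolock _l(gLock);
    while (!gHaveWork)
        gWaitWorkCV.wait(gLock);           // sleep until broadcast()
}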
The steps an application goes through to play audio with AudioTrack (a native-side sketch follows the list):
1. new AudioTrack() --> corresponds to JNI native_setup
2. audio.play(); --> corresponds to JNI native_start();
   byte[] buffer = new byte[4096];
3. audio.write(buffer, 0, 4096); --> corresponds to JNI native_write_byte
4. audio.stop();
5. audio.release();
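A hedged native-side sketch of the same five steps, driving libmedia's AudioTrack in streaming (write) mode with no callback; the constructor arguments shown are the common abbreviated form, and playPcmSketch is a made-up wrapper:
#include <media/AudioTrack.h>
using namespace android;

void playPcmSketch(const void* pcm, size_t bytes) {
    AudioTrack* track = new AudioTrack(AUDIO_STREAM_MUSIC,      // 1. create
                                       44100,
                                       AUDIO_FORMAT_PCM_16_BIT,
                                       AUDIO_CHANNEL_OUT_STEREO);
    if (track->initCheck() == NO_ERROR) {
        track->start();              // 2. start -> TrackHandle::start()
        track->write(pcm, bytes);    // 3. blocking write into the cblk FIFO
        track->stop();               // 4. stop
    }
    delete track;                    // 5. release
}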
1. native_setup in android_media_AudioTrack.cpp: the key calls are lpTrack = new AudioTrack() and lpTrack->set(); inside set, the key calls are AudioSystem::getOutput() and new AudioTrackThread(). Let's trace them:
AudioTrack.cpp:
180 status_t AudioTrack::set( ----------> audio_io_handle_t output = AudioSystem::getOutput(
AudioSystem.cpp:
audio_io_handle_t AudioSystem::getOutput( --------> return aps->getOutput(stream, samplingRate, format, channels, flags);
AudioPolicyService.cpp
audio_io_handle_t AudioPolicyService::getOutput( ---------> return mpAudioPolicy->get_output(mpAudioPolicy, stream, samplingRate, format,
channels, flags);
hardware/libhardware_legacy/audio/audio_policy_hal.cpp
lap->policy.get_output = ap_get_output; -----------> return lap->apm->getOutput((AudioSystem::stream_type)stream,
AudioPolicyManagerBase.cpp
audio_io_handle_t AudioPolicyManagerBase::getOutput(AudioSystem::stream_type stream, ------------> mTestOutputs[mCurOutput] = mpClientInterface->openOutput(0, &outputDesc->mDevice, -----------> mpClientInterface = clientInterface; ----------------->
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
Here, in AudioPolicyManagerDefault.h:
25 class AudioPolicyManagerDefault: public AudioPolicyManagerBase
26 {
27
28 public:
29 AudioPolicyManagerDefault(AudioPolicyClientInterface *clientInterface)
30 : AudioPolicyManagerBase(clientInterface) {}
31
32 virtual ~AudioPolicyManagerDefault() {}
33
34 };
35 };
So AudioPolicyManagerBase is inherited by AudioPolicyManagerDefault; look at how AudioPolicyManagerDefault gets constructed:
AudioPolicyManagerDefault.cpp
24 extern "C" AudioPolicyInterface* createAudioPolicyManager(AudioPolicyClientInterface *clientInterface)
25 {
26 return new AudioPolicyManagerDefault(clientInterface);
27 }
Next, look at where createAudioPolicyManager is invoked:
audio_policy_hal.cpp
lap->apm = createAudioPolicyManager(lap->service_client); ---------------> 359 lap->service_client = new AudioPolicyCompatClient(aps_ops, service);
AudioPolicyCompatClient's constructor is in AudioPolicyCompatClient.h:
34 AudioPolicyCompatClient(struct audio_policy_service_ops *serviceOps,
35 void *service) :
36 mServiceOps(serviceOps) , mService(service) {}
...................................
struct audio_policy_service_ops* mServiceOps;
...............................
}
So what gets called is the openOutput method in AudioPolicyCompatClient.cpp:
38 audio_io_handle_t AudioPolicyCompatClient::openOutput(audio_module_handle_t module,
39 audio_devices_t *pDevices,
40 uint32_t *pSamplingRate,
41 audio_format_t *pFormat,
42 audio_channel_mask_t *pChannelMask,
43 uint32_t *pLatencyMs,
44 audio_output_flags_t flags)
45 {
46 return mServiceOps->open_output_on_module(mService, module, pDevices, pSamplingRate,
47 pFormat, pChannelMask, pLatencyMs,
48 flags);
49 }
mServiceOps here was assigned in the AudioPolicyCompatClient constructor above, so back to audio_policy_hal.cpp:
311 static int create_legacy_ap(const struct audio_policy_device *device,
312 struct audio_policy_service_ops *aps_ops,
313 void *service,
314 struct audio_policy **ap)
The second argument passed to AudioPolicyCompatClient above is exactly this function's service parameter,
dev->device.create_audio_policy = create_legacy_ap;
and create_audio_policy is invoked from the AudioPolicyService.cpp constructor:
83 rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
84 &mpAudioPolicy);
The aps_ops passed in here (by reference) is defined in AudioPolicyService:
1537 namespace {
1538 struct audio_policy_service_ops aps_ops = {
1539 open_output : aps_open_output,
1540 open_duplicate_output : aps_open_dup_output,
1541 close_output : aps_close_output,
1542 suspend_output : aps_suspend_output,
1543 restore_output : aps_restore_output,
1544 open_input : aps_open_input,
1545 close_input : aps_close_input,
1546 set_stream_volume : aps_set_stream_volume,
1547 set_stream_output : aps_set_stream_output,
1548 set_parameters : aps_set_parameters,
1549 get_parameters : aps_get_parameters,
1550 start_tone : aps_start_tone,
1551 stop_tone : aps_stop_tone,
1552 set_voice_volume : aps_set_voice_volume,
1553 move_effects : aps_move_effects,
1554 load_hw_module : aps_load_hw_module,
1555 open_output_on_module : aps_open_output_on_module,
1556 open_input_on_module : aps_open_input_on_module,
1557 };
1558 }; // namespace <unnamed>
So aps_open_output_on_module gets called:
1378 return af->openOutput(module, pDevices, pSamplingRate, pFormat, pChannelMask,
1379 pLatencyMs, flags);
which finally calls AudioFlinger.cpp's openOutput, already analyzed above.
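Note the pattern running through this whole chain: the C++ service hands the legacy HAL a plain C table of function pointers (aps_ops) plus an opaque cookie (the AudioPolicyService this), and the HAL calls back up through the table. A stripped-down sketch with hypothetical names:
struct service_ops {
    int (*open_output)(void* service);     // C-style callback slot
};

static int my_open_output(void* service) {
    // the cookie is cast back to the real C++ service object here;
    // aps_open_output_on_module, for instance, just forwards to
    // AudioFlinger's openOutput
    return 0;
}

static struct service_ops g_ops = { my_open_output };
// The HAL stores both pointers and later invokes: ops->open_output(cookie);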
Right after getOutput returns, AudioTrack::set creates the callback thread and starts it running:
275 if (cbf != NULL) {
276 mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
277 mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
278 }
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
At line 1463, mPaused was initialized to true in the AudioTrackThread constructor, so the thread enters wait until a signal wakes it.
Now there is a thread on the client side and a thread on the service side; what remains is for the client to keep writing data while the service keeps reading it.
Next:
281 status_t status = createTrack_l(streamType,
282 sampleRate,
283 format,
284 (uint32_t)channelMask,
285 frameCount,
286 flags,
287 sharedBuffer,
288 output);
This function calls -------------> audioFlinger->createTrack, whose Binder mechanics were analyzed in detail above.
Continuing:
890 sp<IMemory> cblk = track->getCblk();
which crosses the process boundary into getCblk in AudioFlinger.cpp:
5767 sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
5768 return mTrack->getCblk();
5769 }
mTrack is the object from the earlier new Track(); but the Track class has no Track::getCblk of its own, so it must be a parent-class method. Look at the inheritance:
class Track : public TrackBase, public VolumeProvider {
.....................................
}
And:
class TrackBase : public ExtendedAudioBufferProvider, public RefBase {
................................
sp<IMemory> getCblk() const { return mCblkMemory; }
................................
sp<IMemory> mCblkMemory;
}
Where is mCblkMemory assigned? In the TrackBase base-class constructor that ran as part of the earlier new Track():
4099 AudioFlinger::ThreadBase::TrackBase::TrackBase(
..................................................
4133 if (client != NULL) {
4134 mCblkMemory = client->heap()->allocate(size);
4135 if (mCblkMemory != 0) {
4136 mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
4137 if (mCblk != NULL) { // construct the shared structure in-place.
4138 new(mCblk) audio_track_cblk_t();
4139 // clear all buffers
4140 mCblk->frameCount = frameCount;
4141 mCblk->sampleRate = sampleRate;
4142 // uncomment the following lines to quickly test 32-bit wraparound
4143 // mCblk->user = 0xffff0000;
4144 // mCblk->server = 0xffff0000;
4145 // mCblk->userBase = 0xffff0000;
4146 // mCblk->serverBase = 0xffff0000;
4147 mChannelCount = channelCount;
4148 mChannelMask = channelMask;
4149 if (sharedBuffer == 0) {
4150 mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
4151 memset(mBuffer, 0, frameCount*channelCount*sizeof(int16_t));
4152 // Force underrun condition to avoid false underrun callback until first data is
4153 // written to buffer (other flags are cleared)
4154 mCblk->flags = CBLK_UNDERRUN_ON;
4155 } else {
4156 mBuffer = sharedBuffer->pointer();
4157 }
4158 mBufferEnd = (uint8_t *)mBuffer + bufferSize;
4159 }
.......................................
}
The client at line 4134 was already analyzed in AudioFlinger::createTrack.
So getCblk ultimately returns the mCblkMemory assigned at line 4134. Just like the earlier return of BpAudioTrack, what comes back across Binder is a reference: interface_cast<IMemory>(reply.readStrongBinder()), i.e. a BpMemory proxy, used to communicate over the anonymous shared memory.
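For reference, the allocation behind line 4134 goes through a MemoryDealer (that is what Client::heap() returns); a hedged sketch of the server-side usage:
#include <binder/MemoryDealer.h>
using namespace android;

// Carve a block out of an ashmem-backed heap; the sizes are arbitrary.
sp<MemoryDealer> dealer = new MemoryDealer(1024 * 1024, "AudioClient");
sp<IMemory>      block  = dealer->allocate(4096);
void*            ptr    = block->pointer();   // server-side mapping
// writeStrongBinder(block->asBinder()) ships it to the client, which
// recovers a BpMemory via interface_cast<IMemory>() and maps the same
// physical pages at its own address with pointer().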
Whew, the set function is finally done.
2. Second step: JNI's native_start() calls AudioTrack::start(), which in turn calls mAudioTrack->start(); mAudioTrack is the remote proxy for the TrackHandle returned earlier, so into AudioFlinger we go:
5771 status_t AudioFlinger::TrackHandle::start() {
5772 return mTrack->start();
5773 }
As expected, the handle does no work itself and delegates to the track, so we land in PlaybackThread::Track::start:
4573 PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
4574 playbackThread->addTrack_l(this);
It essentially does the two things above: fetch the current thread and add the track to the active set. addTrack_l was analyzed earlier; it wakes the playbackThread. At this point the playbackThread is running, but no data has arrived yet, so on we go.
3. Third step: JNI's native_write_byte(). The AudioTrack side keeps copying data in, so AudioTrack's threadLoop finally has work to do:
1459 bool AudioTrack::AudioTrackThread::threadLoop()
1460 {
1461 {
1462 AutoMutex _l(mMyLock);
1463 if (mPaused) {
1464 mMyCond.wait(mMyLock);
1465 // caller will check for exitPending()
1466 return true;
1467 }
1468 }
1469 if (!mReceiver.processAudioBuffer(this)) {
1470 pause();
1471 }
1472 return true;
1473 }
processAudioBuffer mainly uses memcpy_to_i16_from_u8 to copy the data into the shared memory; meanwhile AudioFlinger's playbackThread threadLoop starts receiving it. Go into AudioFlinger::PlaybackThread::threadLoop() to see its implementation:
First, checkForNewParameters_l checks whether the current configuration has changed, for example Bluetooth being switched on;
then checkSilentMode_l() checks whether silent mode is enabled, via property_get("ro.audio.silent", value, "0");
then prepareTracks_l, which gets into audio signal processing; suffice it to say the function is long and extremely complex, but it does not affect our analysis;
then threadLoop_mix(), which lands in AudioMixer.cpp:
602 void AudioMixer::process(int64_t pts)
603 {
604 mState.hook(&mState, pts);
605 }
mState.hook is a callback function; we won't analyze it in detail (look at this class's constructor if you want to know more). In short, its possible values are:
process__validate
process__nop
process__genericNoResampling
process__genericResampling
process__OneTrack16BitsStereoNoResampling
process__TwoTracks16BitsStereoNoResampling
It is in these functions that the data written over by AudioTrack is fetched and the actual audio processing is done; very complex, and frankly hard to follow.
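mState.hook is simply a function pointer that process__validate re-points at the cheapest specialized routine for the current track configuration. A minimal sketch of that dispatch style (a hypothetical simplification of AudioMixer's scheme):
struct state_t;
typedef void (*process_hook_t)(state_t* state);

struct state_t {
    process_hook_t hook;    // re-pointed whenever the track setup changes
    int trackCount;
};

static void process__oneTrack(state_t* s) { /* fast path, no resampling */ }
static void process__generic(state_t* s)  { /* resampling, many tracks */ }

static void process__validate(state_t* s) {
    // pick the cheapest routine for the current configuration, then run it
    s->hook = (s->trackCount == 1) ? process__oneTrack : process__generic;
    s->hook(s);
}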
Then comes threadLoop_write(), which in turn calls PlaybackThread::threadLoop_write():
bytesWritten = (int)mOutput->stream->write(mOutput->stream, mMixBuffer, mixBufferSize); the data is finally written out to the output. Having read this far, my heart is in pieces!!!