Android Audio Code Analysis 2 - The getMinBufferSize Function

The AudioTrack usage example called the getMinBufferSize function; today let's dig it out and chew on it a bit more.


*****************************************Source code*************************************************
 static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
        int channelCount = 0;
        switch(channelConfig) {
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            channelCount = 2;
            break;
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return AudioTrack.ERROR_BAD_VALUE;
        }
        
        if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT) 
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
            loge("getMinBufferSize(): Invalid audio format.");
            return AudioTrack.ERROR_BAD_VALUE;
        }
        
        if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {
            loge("getMinBufferSize(): " + sampleRateInHz +"Hz is not a supported sample rate.");
            return AudioTrack.ERROR_BAD_VALUE;
        }
        
        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if ((size == -1) || (size == 0)) {
            loge("getMinBufferSize(): error querying hardware");
            return AudioTrack.ERROR;
        }
        else {
            return size;
        }
    }


***********************************************************************************************
Source path:
frameworks\base\media\java\android\media\AudioTrack.java


###########################################Notes##############################################################
First, let's look at the doc comment that ships with it:
    /**
     * Returns the minimum buffer size required for the successful creation of an AudioTrack
     * object to be created in the {@link #MODE_STREAM} mode. Note that this size doesn't
     * guarantee a smooth playback under load, and higher values should be chosen according to
     * the expected frequency at which the buffer will be refilled with additional data to play. 
     * @param sampleRateInHz the sample rate expressed in Hertz.
     * @param channelConfig describes the configuration of the audio channels. 
     *   See {@link AudioFormat#CHANNEL_OUT_MONO} and
     *   {@link AudioFormat#CHANNEL_OUT_STEREO}
     * @param audioFormat the format in which the audio data is represented. 
     *   See {@link AudioFormat#ENCODING_PCM_16BIT} and 
     *   {@link AudioFormat#ENCODING_PCM_8BIT}
     * @return {@link #ERROR_BAD_VALUE} if an invalid parameter was passed,
     *   or {@link #ERROR} if the implementation was unable to query the hardware for its output 
     *     properties, 
     *   or the minimum buffer size expressed in bytes.
     */


From the comment we can see that the minimum buffer size obtained from this function only guarantees that an AudioTrack object can be created successfully in MODE_STREAM mode.
It does not guarantee smooth playback.
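
As a quick illustration, here is a minimal usage sketch (the 4x multiplier is just a common rule of thumb, not something this code mandates):

    int sampleRate = 44100;
    int minSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    if (minSize != AudioTrack.ERROR_BAD_VALUE && minSize != AudioTrack.ERROR) {
        // The minimum only guarantees creation, so a larger multiple is
        // commonly used to reduce the chance of underruns under load.
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minSize * 4, AudioTrack.MODE_STREAM);
        track.play();
        // ... feed PCM data with track.write(...) ...
    }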


1. I won't go over the parameters; see the comment above, and the previous article also covered them.
2. A local variable is declared:
     int channelCount = 0;
   It records the number of channels.
   It is used later when calling the native function native_get_min_buff_size,
   which shows the buffer size is ultimately decided by the native layer.
3. Next, the channel count is computed from the channel configuration:
        switch(channelConfig) {
        case AudioFormat.CHANNEL_OUT_MONO:
        case AudioFormat.CHANNEL_CONFIGURATION_MONO:
            channelCount = 1;
            break;
        case AudioFormat.CHANNEL_OUT_STEREO:
        case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
            channelCount = 2;
            break;
        default:
            loge("getMinBufferSize(): Invalid channel configuration.");
            return AudioTrack.ERROR_BAD_VALUE;
        }


   The MONO variants map to 1 and the STEREO variants to 2.
   However, as we have seen before, there are more channel configurations than these. There is a whole pile of them:
    public static final int CHANNEL_OUT_FRONT_LEFT = 0x4;
    public static final int CHANNEL_OUT_FRONT_RIGHT = 0x8;
    public static final int CHANNEL_OUT_FRONT_CENTER = 0x10;
    public static final int CHANNEL_OUT_LOW_FREQUENCY = 0x20;
    public static final int CHANNEL_OUT_BACK_LEFT = 0x40;
    public static final int CHANNEL_OUT_BACK_RIGHT = 0x80;
    public static final int CHANNEL_OUT_FRONT_LEFT_OF_CENTER = 0x100;
    public static final int CHANNEL_OUT_FRONT_RIGHT_OF_CENTER = 0x200;
    public static final int CHANNEL_OUT_BACK_CENTER = 0x400;
    public static final int CHANNEL_OUT_MONO = CHANNEL_OUT_FRONT_LEFT;
    public static final int CHANNEL_OUT_STEREO = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT);
    public static final int CHANNEL_OUT_QUAD = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);
    public static final int CHANNEL_OUT_SURROUND = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_BACK_CENTER);
    public static final int CHANNEL_OUT_5POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT);
    public static final int CHANNEL_OUT_7POINT1 = (CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_RIGHT |
            CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_LOW_FREQUENCY | CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT |
            CHANNEL_OUT_FRONT_LEFT_OF_CENTER | CHANNEL_OUT_FRONT_RIGHT_OF_CENTER);



   Moreover, AudioFormat.CHANNEL_CONFIGURATION_MONO and AudioFormat.CHANNEL_CONFIGURATION_STEREO are not even part of that pile; they are defined before it:
    /** Mono audio configuration */
    /** @deprecated use CHANNEL_OUT_MONO or CHANNEL_IN_MONO instead  */
    @Deprecated    public static final int CHANNEL_CONFIGURATION_MONO      = 2;
    /** Stereo (2 channel) audio configuration */
    /** @deprecated use CHANNEL_OUT_STEREO or CHANNEL_IN_STEREO instead  */
    @Deprecated    public static final int CHANNEL_CONFIGURATION_STEREO    = 3;



   Does this mean none of the other channel configurations ever need this min buffer size?
   Or is it that only mono and stereo are supported for now? (See the sketch below.)
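
   If the multi-channel masks above ever did need a channel count, the bit-mask layout suggests one could simply count the set bits. A hypothetical sketch, not what the framework actually does here (the switch above rejects everything except mono and stereo):

    // Hypothetical helper: each CHANNEL_OUT_* speaker position occupies one
    // bit, so e.g. CHANNEL_OUT_5POINT1 has six bits set.
    static int channelCountFromMask(int channelMask) {
        return Integer.bitCount(channelMask);
    }

   (Amusingly, this even yields the right answers for the deprecated CHANNEL_CONFIGURATION_MONO = 2 and CHANNEL_CONFIGURATION_STEREO = 3, though only by coincidence, since those values are not bit masks.)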


4. Next, the audio format is validated, i.e. the number of bits per sample:
        if ((audioFormat != AudioFormat.ENCODING_PCM_16BIT) 
            && (audioFormat != AudioFormat.ENCODING_PCM_8BIT)) {
            loge("getMinBufferSize(): Invalid audio format.");
            return AudioTrack.ERROR_BAD_VALUE;
        }


   So only 16-bit and 8-bit PCM are supported.


5. Then the sample rate is validated:
        if ( (sampleRateInHz < 4000) || (sampleRateInHz > 48000) ) {
            loge("getMinBufferSize(): " + sampleRateInHz +"Hz is not a supported sample rate.");
            return AudioTrack.ERROR_BAD_VALUE;
        }


   Only sample rates from 4000 Hz to 48000 Hz are supported.


6. Finally, the call drops into native code:
        int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
        if ((size == -1) || (size == 0)) {
            loge("getMinBufferSize(): error querying hardware");
            return AudioTrack.ERROR;
        }
        else {
            return size;
        }


   Clearly the real work is done in the native layer; the Java layer only performs some sanity checks.


   From the JNI method mapping table in the earlier article, native_get_min_buff_size corresponds to the native function android_media_AudioTrack_get_min_buff_size.
   Path: frameworks\base\core\jni\android_media_AudioTrack.cpp


   The implementation of android_media_AudioTrack_get_min_buff_size:
// returns the minimum required size for the successful creation of a streaming AudioTrack
// returns -1 if there was an error querying the hardware.
static jint android_media_AudioTrack_get_min_buff_size(JNIEnv *env,  jobject thiz,
    jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    int frameCount = 0;
    if (AudioTrack::getMinFrameCount(&frameCount, AudioSystem::DEFAULT,
            sampleRateInHertz) != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1);
}




   So the minimum buffer size is frameCount multiplied by the channel count, then multiplied by 2 or 1 depending on the audio format (2 bytes per sample for 16-bit PCM, 1 byte for 8-bit).
   The channel count and audio format are the values passed in; nothing more to say about them.
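
   To see the units, a quick calculation with a made-up frame count (the real one comes from getMinFrameCount, examined next):

    // Hypothetical: suppose the native layer computed 4410 frames.
    int frameCount = 4410;      // assumed, for illustration only
    int nbChannels = 2;         // stereo
    int bytesPerSample = 2;     // ENCODING_PCM_16BIT
    int sizeInBytes = frameCount * nbChannels * bytesPerSample; // 17640 bytes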
   frameCount is obtained by calling AudioTrack::getMinFrameCount; as the name suggests, it is the minimum number of frames.
   Three arguments are passed in:
     &frameCount receives the computed frame count.
     sampleRateInHertz is the sample rate.
     AudioSystem::DEFAULT is hard-coded. It is defined in the AudioSystem class, along with the other stream types:
    enum stream_type {
        DEFAULT          =-1,
        VOICE_CALL       = 0,
        SYSTEM           = 1,
        RING             = 2,
        MUSIC            = 3,
        ALARM            = 4,
        NOTIFICATION     = 5,
        BLUETOOTH_SCO    = 6,
        ENFORCED_AUDIBLE = 7, // Sounds that cannot be muted by user and must be routed to speaker
        DTMF             = 8,
        TTS              = 9,
        NUM_STREAM_TYPES
    };




   So it is a stream type.
   But why hard-code DEFAULT here instead of letting the caller pass a stream type into getMinBufferSize?


   Let's set that aside and continue with AudioTrack::getMinFrameCount.


   The implementation of AudioTrack::getMinFrameCount:
status_t AudioTrack::getMinFrameCount(
        int* frameCount,
        int streamType,
        uint32_t sampleRate)
{
    int afSampleRate;
    if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    int afFrameCount;
    if (AudioSystem::getOutputFrameCount(&afFrameCount, streamType) != NO_ERROR) {
        return NO_INIT;
    }
    uint32_t afLatency;
    if (AudioSystem::getOutputLatency(&afLatency, streamType) != NO_ERROR) {
        return NO_INIT;
    }

    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;

    *frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
              afFrameCount * minBufCount * sampleRate / afSampleRate;
    return NO_ERROR;
}



   It starts by calling three AudioSystem functions that we have glimpsed before but ignored at the time; let's look at them today.


   The implementation of AudioSystem::getOutputSamplingRate:
status_t AudioSystem::getOutputSamplingRate(int* samplingRate, int streamType)
{
    OutputDescriptor *outputDesc;
    audio_io_handle_t output;

    if (streamType == DEFAULT) {
        streamType = MUSIC;
    }

    output = getOutput((stream_type)streamType);
    if (output == 0) {
        return PERMISSION_DENIED;
    }

    gLock.lock();
    outputDesc = AudioSystem::gOutputs.valueFor(output);
    if (outputDesc == 0) {
        LOGV("getOutputSamplingRate() no output descriptor for output %d in gOutputs", output);
        gLock.unlock();
        const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
        if (af == 0) return PERMISSION_DENIED;
        *samplingRate = af->sampleRate(output);
    } else {
        LOGV("getOutputSamplingRate() reading from output desc");
        *samplingRate = outputDesc->samplingRate;
        gLock.unlock();
    }

    LOGV("getOutputSamplingRate() streamType %d, output %d, sampling rate %d", streamType, output, *samplingRate);

    return NO_ERROR;
}




   It checks the stream type, and if it is DEFAULT, it rewrites it to MUSIC!
   Aha, so that's how the DEFAULT stream type is used.
   This also largely answers the earlier question: hard-coding DEFAULT in the JNI call is acceptable because it simply falls back to querying the MUSIC output.


   Next it obtains the output handle for the stream type, then looks up that output's descriptor.

   If a descriptor is found, its samplingRate field is the sample rate we want.
   Otherwise, it falls back to querying AudioFlinger for the sample rate.


   AudioSystem::getOutputFrameCount and AudioSystem::getOutputLatency are handled similarly to AudioSystem::getOutputSamplingRate.


   At this point the hardware sample rate, frame count, and latency have all been obtained.

   Next, minBufCount is computed:
    // Ensure that buffer depth covers at least audio hardware latency
    uint32_t minBufCount = afLatency / ((1000 * afFrameCount) / afSampleRate);
    if (minBufCount < 2) minBufCount = 2;



   As the comment says, the buffer depth must at least cover the audio hardware latency.
   The formula looks cryptic at first. Here is an explanation of "frame", quoted from
   http://blog.csdn.net/innost/article/details/6125779:
     A frame is the bytes per sample point multiplied by the channel count. Why invent a frame? Because for
     multi-channel audio, the byte count of a single sample point doesn't tell the whole story: during playback,
     the data for all channels has to come out together. So for convenience we speak of how many frames there are
     per second, which expresses the idea completely while leaving the channel count out of it.

   With that, the formula can be decoded: (1000 * afFrameCount) / afSampleRate is the duration, in milliseconds,
   of one hardware buffer holding afFrameCount frames. Dividing afLatency (also in milliseconds) by that duration
   gives the number of such buffers needed to span the hardware latency, clamped to at least 2 so that one buffer
   can be filled while another is playing.
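
   A sanity check with made-up numbers (illustrative only, not from any particular device):

    // Hypothetical hardware values reported by AudioSystem.
    int afSampleRate = 44100;  // mixer sample rate (Hz)
    int afFrameCount = 1024;   // frames per hardware buffer
    int afLatency    = 100;    // output latency (ms)

    // One hardware buffer lasts (1000 * 1024) / 44100 = 23 ms.
    int bufDurationMs = (1000 * afFrameCount) / afSampleRate;

    // 100 / 23 = 4 buffers are needed to cover the latency.
    int minBufCount = afLatency / bufDurationMs;
    if (minBufCount < 2) minBufCount = 2;  // keep at least double buffering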


   Finally, frameCount is computed:
         *frameCount = (sampleRate == 0) ? afFrameCount * minBufCount :
              afFrameCount * minBufCount * sampleRate / afSampleRate;




   Our sampleRate is certainly non-zero, so the result is: afFrameCount * minBufCount * sampleRate / afSampleRate.
   The sampleRate / afSampleRate factor rescales the hardware frame count to the requested rate: a track playing at a higher sample rate consumes frames faster, so proportionally more frames are needed to cover the same latency.
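
   Continuing the made-up numbers from above and feeding the result back into the JNI formula:

    int sampleRate = 44100;  // rate requested by the app
    // frameCount = 1024 * 4 * 44100 / 44100 = 4096 frames
    int frameCount = afFrameCount * minBufCount * sampleRate / afSampleRate;
    // For stereo 16-bit PCM, getMinBufferSize() would then return
    // 4096 * 2 * 2 = 16384 bytes.
    int minBufferSize = frameCount * 2 * 2;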