Android Audio Development

一、Recording

1、Recording with AudioRecord

An AudioRecord object can be created with AudioRecord.Builder():

private AudioRecord mAudioRecord = new AudioRecord.Builder()
        .setAudioSource(MediaRecorder.AudioSource.VOICE_COMMUNICATION)
        .setAudioFormat(audioFormat)
        .setBufferSizeInBytes(Config.AUDIO_CONFIG.READ_AUDIO_BUFFER_SIZE_BY_BYTES)
        .build();

1.1、setAudioSource

setAudioSource() sets the type of audio source to record from.

1.2、setAudioFormat

AudioFormat is the main class for configuring the recording parameters; an AudioFormat object can be created with AudioFormat.Builder():

private AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(Config.AUDIO_CONFIG.ENCODING_PCM)
        .setSampleRate(Config.AUDIO_CONFIG.SAMPLE_RATE_IN_HZ)
        .setChannelMask(Config.AUDIO_CONFIG.RECORD_CHANNEL_CONFIG)
        .build();

1.2.1、setEncoding

setEncoding() configures the encoding format; AudioFormat defines many encodings (e.g. ENCODING_PCM_16BIT).

1.2.2、setSampleRate

The sample rate is the number of samples taken per second.

Common sample-rate specifications:

    Pair<Integer, Integer> SAMPLE_RATE_96000 = new Pair<>(0x00, 96000);
    Pair<Integer, Integer> SAMPLE_RATE_88200 = new Pair<>(0x01, 88200);
    Pair<Integer, Integer> SAMPLE_RATE_64000 = new Pair<>(0x02, 64000);
    Pair<Integer, Integer> SAMPLE_RATE_48000 = new Pair<>(0x03, 48000);
    Pair<Integer, Integer> SAMPLE_RATE_44100 = new Pair<>(0x04, 44100);
    Pair<Integer, Integer> SAMPLE_RATE_32000 = new Pair<>(0x05, 32000);
    Pair<Integer, Integer> SAMPLE_RATE_24000 = new Pair<>(0x06, 24000);
    Pair<Integer, Integer> SAMPLE_RATE_22050 = new Pair<>(0x07, 22050);
    Pair<Integer, Integer> SAMPLE_RATE_16000 = new Pair<>(0x08, 16000);
    Pair<Integer, Integer> SAMPLE_RATE_12000 = new Pair<>(0x09, 12000);
    Pair<Integer, Integer> SAMPLE_RATE_11025 = new Pair<>(0x0A, 11025);
    Pair<Integer, Integer> SAMPLE_RATE_8000 = new Pair<>(0x0B, 8000);

The sample rate is bounded; the value we pass must fall within this range:

public static final int SAMPLE_RATE_HZ_MIN = AudioSystem.SAMPLE_RATE_HZ_MIN;
public static final int SAMPLE_RATE_HZ_MAX = AudioSystem.SAMPLE_RATE_HZ_MAX;

// The underlying (hidden) definitions:
/** Minimum value for sample rate,
 *  assuming AudioTrack and AudioRecord share the same limitations.
 * @hide
 */
// never unhide
public static final int SAMPLE_RATE_HZ_MIN = 4000;
/** Maximum value for sample rate,
 *  assuming AudioTrack and AudioRecord share the same limitations.
 * @hide
 */
// never unhide
public static final int SAMPLE_RATE_HZ_MAX = 192000;
/** Sample rate will be a route-dependent value.
 * For AudioTrack, it is usually the sink sample rate,
 * and for AudioRecord it is usually the source sample rate.
 */
public static final int SAMPLE_RATE_UNSPECIFIED = 0;

Passing a value outside this range throws an exception:

public Builder setSampleRate(int sampleRate) throws IllegalArgumentException {
    if (((sampleRate < SAMPLE_RATE_HZ_MIN) || (sampleRate > SAMPLE_RATE_HZ_MAX)) &&
            sampleRate != SAMPLE_RATE_UNSPECIFIED) {
        throw new IllegalArgumentException("Invalid sample rate " + sampleRate);
    }
    mSampleRate = sampleRate;
    mPropertySetMask |= AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE;
    return this;
}
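
The same bounds check can be reproduced and exercised standalone; a small sketch using the hidden constant values shown above (the class and method names here are mine, not the framework's):

```java
public class SampleRateCheck {
    // Values copied from the hidden AudioFormat constants listed above.
    static final int SAMPLE_RATE_HZ_MIN = 4000;
    static final int SAMPLE_RATE_HZ_MAX = 192000;
    static final int SAMPLE_RATE_UNSPECIFIED = 0;

    // Mirrors the condition inside Builder.setSampleRate():
    // SAMPLE_RATE_UNSPECIFIED is always accepted, otherwise the
    // value must lie within [MIN, MAX].
    static boolean isValid(int sampleRate) {
        return sampleRate == SAMPLE_RATE_UNSPECIFIED
                || (sampleRate >= SAMPLE_RATE_HZ_MIN && sampleRate <= SAMPLE_RATE_HZ_MAX);
    }

    public static void main(String[] args) {
        System.out.println(isValid(44100)); // true
        System.out.println(isValid(3999));  // false
    }
}
```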

1.2.3、setChannelMask

The channel mask describes the channel configuration, and hence the channel count. Roughly speaking, the channel count relates to how spatial the sound is: the more channels (when the device supports them), the more immersive the playback.

public static final int CHANNEL_IN_DEFAULT = 1;
// These directly match native
public static final int CHANNEL_IN_LEFT = 0x4;
public static final int CHANNEL_IN_RIGHT = 0x8;
public static final int CHANNEL_IN_FRONT = 0x10;
public static final int CHANNEL_IN_BACK = 0x20;
public static final int CHANNEL_IN_LEFT_PROCESSED = 0x40;
public static final int CHANNEL_IN_RIGHT_PROCESSED = 0x80;
public static final int CHANNEL_IN_FRONT_PROCESSED = 0x100;
public static final int CHANNEL_IN_BACK_PROCESSED = 0x200;
public static final int CHANNEL_IN_PRESSURE = 0x400;
public static final int CHANNEL_IN_X_AXIS = 0x800;
public static final int CHANNEL_IN_Y_AXIS = 0x1000;
public static final int CHANNEL_IN_Z_AXIS = 0x2000;
public static final int CHANNEL_IN_VOICE_UPLINK = 0x4000;
public static final int CHANNEL_IN_VOICE_DNLINK = 0x8000;

public static final int CHANNEL_IN_MONO = CHANNEL_IN_FRONT;
public static final int CHANNEL_IN_STEREO = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT);
/** @hide */
public static final int CHANNEL_IN_FRONT_BACK = CHANNEL_IN_FRONT | CHANNEL_IN_BACK;
public static final int CHANNEL_IN_7POINT1 = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT
           | CHANNEL_IN_FRONT | CHANNEL_IN_BACK
           | CHANNEL_IN_LEFT_PROCESSED | CHANNEL_IN_RIGHT_PROCESSED
           | CHANNEL_IN_FRONT_PROCESSED | CHANNEL_IN_BACK_PROCESSED);
// CHANNEL_IN_ALL is not yet defined; if added then it should match AUDIO_CHANNEL_IN_ALL

Of these, mono and stereo are:

public static final int CHANNEL_IN_MONO = CHANNEL_IN_FRONT;
public static final int CHANNEL_IN_STEREO = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT);
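
Since a channel mask is literally a bit mask, the channel count can be recovered by counting the set bits. A standalone sketch using the constant values listed above (the class name is mine):

```java
public class ChannelMaskDemo {
    // Values copied from the AudioFormat source shown above.
    static final int CHANNEL_IN_LEFT  = 0x4;
    static final int CHANNEL_IN_RIGHT = 0x8;
    static final int CHANNEL_IN_FRONT = 0x10;

    static final int CHANNEL_IN_MONO   = CHANNEL_IN_FRONT;
    static final int CHANNEL_IN_STEREO = CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT;

    // Each set bit is one channel, so the channel count is the bit count.
    static int channelCount(int mask) {
        return Integer.bitCount(mask);
    }

    public static void main(String[] args) {
        System.out.println(channelCount(CHANNEL_IN_MONO));   // 1
        System.out.println(channelCount(CHANNEL_IN_STEREO)); // 2
    }
}
```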

1.3、setBufferSizeInBytes

During recording the audio data comes back continuously as a stream, so we can set the size of the buffer returned on each read; this size determines the interval between reads.

Once the sample rate, channel count, and bit depth are fixed, the amount of data produced per second is fixed; given the size of each returned chunk, we can therefore estimate how many reads happen per second, i.e. the interval between consecutive audio chunks.
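
That estimate is plain arithmetic; a sketch (the concrete figures below, 44.1 kHz stereo 16-bit with a 3528-byte buffer, are illustrative, not the values in Config):

```java
public class PcmTiming {
    // Bytes of PCM produced per second = sampleRate * channels * bytesPerSample.
    static int bytesPerSecond(int sampleRate, int channelCount, int bytesPerSample) {
        return sampleRate * channelCount * bytesPerSample;
    }

    // How much time one read buffer covers, in milliseconds.
    static double bufferIntervalMs(int bufferSizeInBytes, int bytesPerSecond) {
        return 1000.0 * bufferSizeInBytes / bytesPerSecond;
    }

    public static void main(String[] args) {
        // 44.1 kHz, stereo, 16-bit PCM -> 176400 bytes/s;
        // a 3528-byte buffer then comes back roughly every 20 ms.
        int bps = bytesPerSecond(44100, 2, 2);
        System.out.println(bps);                         // 176400
        System.out.println(bufferIntervalMs(3528, bps)); // 20.0
    }
}
```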

The buffer size must not be smaller than MinBufferSize, which can be obtained with getMinBufferSize():

int AudioRecord.getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat);

Passing in the sample rate, channel configuration, and data format chosen earlier yields the MinBufferSize for those recording parameters; the BufferSizeInBytes we set simply needs to be at least MinBufferSize.

Note: MinBufferSize differs for different audio parameters!

2、Starting the Recording

2.1、AudioRecord exposes its state: whether it has been initialized, and whether it is currently recording

/**
 *  indicates AudioRecord state is not successfully initialized.
 */
public static final int STATE_UNINITIALIZED = 0;
/**
 *  indicates AudioRecord state is ready to be used
 */
public static final int STATE_INITIALIZED   = 1;

/**
 * indicates AudioRecord recording state is not recording
 */
public static final int RECORDSTATE_STOPPED = 1;  // matches SL_RECORDSTATE_STOPPED
/**
 * indicates AudioRecord recording state is recording
 */
public static final int RECORDSTATE_RECORDING = 3;// matches SL_RECORDSTATE_RECORDING

Check the recording state before recording:

int recordingState = mAudioRecord.getRecordingState();

2.2、Start recording

Call mAudioRecord.startRecording(), then start a thread that continuously reads audio data from the AudioRecord:

Thread thread = new Thread(new Runnable() {
        @Override
        public void run() {
            while (mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                // First parameter: audioData – the array to which the recorded audio data is written.
                byte[] tempAudioData = new byte[Config.AUDIO_CONFIG.READ_AUDIO_BUFFER_SIZE_BY_BYTES];
                // Second parameter: offsetInBytes – index in audioData at which writing starts;
                // we need nothing special here, so it is 0.
                // Third parameter: the length of the tempAudioData array.
                int bufferReadResult = mAudioRecord.read(tempAudioData, 0, Config.AUDIO_CONFIG.READ_AUDIO_BUFFER_SIZE_BY_BYTES);
                if (bufferReadResult < 0) {
                    Log.w(TAG, "getRecordAndRTPSendRunnable bufferReadResult = " + bufferReadResult);
                    break;
                }
            }

        }
});

thread.start();
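
The PCM captured above is raw and has no container, so ordinary players cannot open it directly. A common trick while developing is to prepend the canonical 44-byte WAV (RIFF) header before the PCM bytes; a minimal sketch of that header, following the standard little-endian RIFF/WAVE field layout (the class name is mine):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class WavHeader {
    // Builds the 44-byte canonical WAV header for uncompressed PCM data.
    static byte[] build(int pcmDataLen, int sampleRate, int channels, int bitsPerSample) {
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        b.putInt(36 + pcmDataLen);                          // file size minus 8
        b.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        b.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        b.putInt(16);                                       // fmt chunk size
        b.putShort((short) 1);                              // audio format: 1 = PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort((short) (channels * bitsPerSample / 8)); // block align
        b.putShort((short) bitsPerSample);
        b.put("data".getBytes(StandardCharsets.US_ASCII));
        b.putInt(pcmDataLen);
        return b.array();
    }
}
```

Write the header first, then the raw PCM bytes, and the resulting file plays as a normal .wav.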

二、Playback

三、Encoding and Decoding

1、Encoding

1.1、Querying the codec formats the device supports

Encoding and decoding are done mainly with MediaCodec.

The method below lists the codec formats the current device supports, and whether each codec is a software or hardware implementation:

public static void getSupportMediaType() {
    MediaCodecList mediaCodecList = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
    MediaCodecInfo[] supportCodecs = mediaCodecList.getCodecInfos();
    Log.d(TAG, "Supported media types:");
    for (MediaCodecInfo codec : supportCodecs) {
        String name = codec.getName();
        // Codecs whose names start with "OMX.google" are software implementations.
        Log.d(TAG, name + " " + (name.startsWith("OMX.google") ? "software " : "hardware ")
            + (codec.isEncoder() ? "encoder" : "decoder"));
    }
}

1.2、AAC encoding with MediaCodec

AAC (Advanced Audio Coding) is a newer-generation lossy audio compression technology.

AAC-encoded files mainly use three extensions:

(1) .aac: traditional AAC encoding in an MPEG-2 Audio Data Transport Stream (ADTS) container;
(2) .mp4: AAC encoding wrapped in a simplified version of MPEG-4 Part 14, i.e. 3GPP Media Release 6 Basic (3gp6);
(3) .m4a: an extension introduced by Apple to distinguish audio-only MP4 files from MP4 files that contain video.

Characteristics:
AAC performs notably well at bit rates below 128 kbit/s.
Typical uses:
audio encoding below 128 kbit/s, most often for the audio track of a video.

1.3、Creating the encoder

Encoding means transforming the PCM audio stream into another data format, so the first step is to decide which encoding to produce:

try {
    mediaCodec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
} catch (IOException e) {
    e.printStackTrace();
}

The codec MIME types defined in MediaFormat:

public static final String MIMETYPE_AUDIO_AMR_NB = "audio/3gpp";
public static final String MIMETYPE_AUDIO_AMR_WB = "audio/amr-wb";
public static final String MIMETYPE_AUDIO_MPEG = "audio/mpeg";
public static final String MIMETYPE_AUDIO_AAC = "audio/mp4a-latm";
public static final String MIMETYPE_AUDIO_QCELP = "audio/qcelp";
public static final String MIMETYPE_AUDIO_VORBIS = "audio/vorbis";
public static final String MIMETYPE_AUDIO_OPUS = "audio/opus";
public static final String MIMETYPE_AUDIO_G711_ALAW = "audio/g711-alaw";
public static final String MIMETYPE_AUDIO_G711_MLAW = "audio/g711-mlaw";
public static final String MIMETYPE_AUDIO_RAW = "audio/raw";
public static final String MIMETYPE_AUDIO_FLAC = "audio/flac";
public static final String MIMETYPE_AUDIO_MSGSM = "audio/gsm";
public static final String MIMETYPE_AUDIO_AC3 = "audio/ac3";
public static final String MIMETYPE_AUDIO_EAC3 = "audio/eac3";
public static final String MIMETYPE_AUDIO_EAC3_JOC = "audio/eac3-joc";
public static final String MIMETYPE_AUDIO_AC4 = "audio/ac4";
public static final String MIMETYPE_AUDIO_SCRAMBLED = "audio/scrambled";

1.4、Configuring the encoder

public void configure(
    @Nullable MediaFormat format,
    @Nullable Surface surface, @Nullable MediaCrypto crypto,
    @ConfigureFlag int flags) {
    configure(format, surface, crypto, null, flags);
}

Since neither a Surface nor encrypted data is involved, these two parameters are null.

To configure the codec as an encoder, pass MediaCodec.CONFIGURE_FLAG_ENCODE for the flags parameter.

1.4.1、MediaFormat

/**
 * Creates a minimal audio format.
 * @param mime The mime type of the content.
 * @param sampleRate The sampling rate of the content.
 * @param channelCount The number of audio channels in the content.
 */
public static final @NonNull MediaFormat createAudioFormat(
        @NonNull String mime,
        int sampleRate,
        int channelCount) {
    MediaFormat format = new MediaFormat();
    format.setString(KEY_MIME, mime);
    format.setInteger(KEY_SAMPLE_RATE, sampleRate);
    format.setInteger(KEY_CHANNEL_COUNT, channelCount);
    return format;
}


/**
 * Creates an empty MediaFormat
 */
public MediaFormat() {
    mMap = new HashMap();
}

MediaFormat is essentially a HashMap that stores the encoding-related parameters as key-value pairs.

The MediaFormat used here is:

MediaFormat mediaFormat = MediaFormat.createAudioFormat(
                MediaFormat.MIMETYPE_AUDIO_AAC,
                Config.AUDIO_CONFIG.SAMPLE_RATE_IN_HZ,
                Config.CODEC_CHANNEL_COUNT);

mediaFormat.setInteger(MediaFormat.KEY_AAC_PROFILE,         
    MediaCodecInfo.CodecProfileLevel.AACObjectLC);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, Config.AUDIO_CONFIG.BITRATES);
mediaFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 
    Config.AUDIO_CONFIG.READ_AUDIO_BUFFER_SIZE_BY_BYTES);

Where:

MediaFormat.KEY_AAC_PROFILE
    Used only with AAC. It describes the AAC profile, which you can think of as the variant of the AAC algorithm. The profiles are predefined in MediaCodecInfo.CodecProfileLevel, so just reference one.

MediaFormat.KEY_BIT_RATE
    The bit rate, i.e. the amount of compressed data produced per second, in bits/sec.

MediaFormat.KEY_MAX_INPUT_SIZE
    The maximum size in bytes of a MediaCodec input buffer; this relates to how MediaCodec works internally.
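
To put KEY_BIT_RATE into perspective: one AAC frame carries 1024 PCM samples, so the frame duration and the average compressed frame size follow directly from the sample rate and bit rate. A sketch (the concrete numbers are illustrative):

```java
public class AacMath {
    // An AAC frame always encodes 1024 PCM samples per channel.
    static final int SAMPLES_PER_AAC_FRAME = 1024;

    // Duration of one AAC frame in milliseconds.
    static double frameDurationMs(int sampleRate) {
        return 1000.0 * SAMPLES_PER_AAC_FRAME / sampleRate;
    }

    // Average compressed bytes per frame at the given bit rate (bits/sec).
    static double avgBytesPerFrame(int bitRate, int sampleRate) {
        return (bitRate / 8.0) * SAMPLES_PER_AAC_FRAME / sampleRate;
    }

    public static void main(String[] args) {
        // At 44.1 kHz a frame spans ~23.2 ms; at 128 kbit/s it
        // averages ~371.5 compressed bytes.
        System.out.println(frameDurationMs(44100));
        System.out.println(avgBytesPerFrame(128000, 44100));
    }
}
```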

1.5、Performing AAC encoding

Before encoding proper, we need to start the encoder and create a MediaCodec.BufferInfo:

mediaCodec.start();
bufferInfo = new MediaCodec.BufferInfo();

Then encode:

public byte[] offerEncoder(byte[] input);

1.5.1、How MediaCodec works

MediaCodec processes data asynchronously through two buffer queues (input buffers and output buffers):

1. The producer (left-side client) requests an empty buffer from the input queue  -->  dequeueInputBuffer
2. The producer copies the data to be encoded/decoded into the empty buffer, then submits it to the input queue  -->  queueInputBuffer
3. MediaCodec takes one frame from the input queue and encodes/decodes it
4. When processing finishes, MediaCodec empties the input buffer and returns it to the input queue, and places the processed data into the output queue
5. The consumer (right-side client) requests a processed buffer from the output queue  -->  dequeueOutputBuffer
6. The consumer renders or plays the processed buffer
7. After rendering/playback, the consumer returns the buffer to the output queue  -->  releaseOutputBuffer

The code:

// The parameter of dequeueInputBuffer is a timeout in microseconds:
// -1 means wait indefinitely
// 0 means do not wait; the call returns immediately, but frames may be dropped
// a positive value means wait up to that long
int inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);

if (inputBufferIndex >= 0) {
    ByteBuffer inputBuffer = mediaCodec.getInputBuffer(inputBufferIndex);
    inputBuffer.clear();
    inputBuffer.put(input);
    long pts = computePresentationTime(presentationTimeUs);
    mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, pts, 0);
    presentationTimeUs += 1; // advance the frame counter used for pts
}
int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
if (outputBufferIndex < 0) {
    Log.w(TAG, "offerEncoder dequeueOutputBuffer = " + outputBufferIndex);
}

while (outputBufferIndex >= 0) {
    int outBitsSize = bufferInfo.size;
    int outPacketSize = outBitsSize + 7; // 7 extra bytes for the ADTS header
    ByteBuffer outputBuffer = mediaCodec.getOutputBuffer(outputBufferIndex);
    outputBuffer.position(bufferInfo.offset);
    outputBuffer.limit(bufferInfo.offset + outBitsSize);
    // prepend the ADTS header
    byte[] outData = new byte[outPacketSize];
    addADTStoPacket(outData,
        outPacketSize,
        MediaCodecInfo.CodecProfileLevel.AACObjectLC,
        Config.AUDIO_CONFIG.SAMPLE_RATE_INDEX
        );
    outputBuffer.get(outData, 7, outBitsSize);
    outputBuffer.position(bufferInfo.offset);
    outputStream.write(outData);
    mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
    outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
}
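
The addADTStoPacket helper used above is not defined in the original text. A commonly used implementation of the 7-byte ADTS header (no CRC) is sketched below; note the explicit channel-configuration parameter is an addition here, since the call above presumably hardcodes it:

```java
public class Adts {
    /**
     * Fills the first 7 bytes of packet with an ADTS header (no CRC).
     * profile:   AAC object type (2 = AAC-LC)
     * freqIdx:   sample-rate index (e.g. 4 = 44100 Hz)
     * chanCfg:   channel configuration (e.g. 2 = stereo)
     * packetLen: header + AAC payload length in bytes
     */
    static void addADTStoPacket(byte[] packet, int packetLen,
                                int profile, int freqIdx, int chanCfg) {
        packet[0] = (byte) 0xFF; // syncword 0xFFF (high 8 bits)
        packet[1] = (byte) 0xF9; // syncword low bits, MPEG-2, no CRC
        packet[2] = (byte) (((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
        packet[3] = (byte) (((chanCfg & 3) << 6) + (packetLen >> 11));
        packet[4] = (byte) ((packetLen & 0x7FF) >> 3);
        packet[5] = (byte) (((packetLen & 7) << 5) + 0x1F);
        packet[6] = (byte) 0xFC;
    }
}
```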

1.6、Encoding summary

Create the MediaCodec --> configure it --> start it --> request an available input buffer from the codec --> fill the buffer with the data to encode --> submit the buffer --> request a finished output buffer --> process the raw encoded block --> return the processed block

2、Decoding (AAC decoding with MediaCodec)

2.1、Verifying the encoded output

For AAC, for example, the encoder output can be written to a file and played with a third-party player. This verifies the encoding work up front, so you can move on to decoding with confidence. Verifying via a file has another benefit: it gives a quick way to build the MediaFormat.

2.2、MediaExtractor

MediaExtractor can extract the MediaFormat not only of plain audio files, but also of container files with multiple tracks, such as MP4:

MediaExtractor extractor = new MediaExtractor(); // create a MediaExtractor
try {
    extractor.setDataSource(mFile.getAbsolutePath()); // set the media file path
} catch (IOException e) {
    e.printStackTrace();
}
int count = extractor.getTrackCount(); // number of tracks
Log.d(TAG, "track count = " + count);
for (int i = 0; i < count; i++) {
    MediaFormat mediaFormat = extractor.getTrackFormat(i);
    Log.d(TAG, "track " + i + " format = " + mediaFormat.getString(MediaFormat.KEY_MIME));
}

In outline: construct a MediaExtractor, point it at the audio/video file to analyze, enumerate the file's tracks, and read each track's MediaFormat.

2.3、Creating the decoder

try {
    mediaCodec = MediaCodec.createDecoderByType(
            mediaFormat.getString(MediaFormat.KEY_MIME));
} catch (IOException e) {
    e.printStackTrace();
}

2.4、Configuring the decoder

Since this configures a decoder, not an encoder, the last flags parameter takes no meaningful value here, so pass 0:

mediaCodec.configure(mediaFormat, null, null, 0);

For reference, AudioFormat also defines the legacy channel-configuration constants and the output channel masks used on the playback side:

/** Invalid audio channel configuration */
/** @deprecated Use {@link #CHANNEL_INVALID} instead.  */
@Deprecated    public static final int CHANNEL_CONFIGURATION_INVALID   = 0;
/** Default audio channel configuration */
/** @deprecated Use {@link #CHANNEL_OUT_DEFAULT} or {@link #CHANNEL_IN_DEFAULT} instead.  */
@Deprecated    public static final int CHANNEL_CONFIGURATION_DEFAULT   = 1;
/** Mono audio configuration */
/** @deprecated Use {@link #CHANNEL_OUT_MONO} or {@link #CHANNEL_IN_MONO} instead.  */
@Deprecated    public static final int CHANNEL_CONFIGURATION_MONO      = 2;
/** Stereo (2 channel) audio configuration */
/** @deprecated Use {@link #CHANNEL_OUT_STEREO} or {@link #CHANNEL_IN_STEREO} instead.  */
@Deprecated    public static final int CHANNEL_CONFIGURATION_STEREO    = 3;

/** Invalid audio channel mask */
public static final int CHANNEL_INVALID = 0;
/** Default audio channel mask */
public static final int CHANNEL_OUT_DEFAULT = 1;

// Output channel mask definitions below are translated to the native values defined in
//  in /system/media/audio/include/system/audio.h in the JNI code of AudioTrack
public static final int CHANNEL_OUT_FRONT_LEFT = 0x4;
public static final int CHANNEL_OUT_FRONT_RIGHT = 0x8;
public static final int CHANNEL_OUT_FRONT_CENTER = 0x10;
public static final int CHANNEL_OUT_LOW_FREQUENCY = 0x20;
public static final int CHANNEL_OUT_BACK_LEFT = 0x40;
public static final int CHANNEL_OUT_BACK_RIGHT = 0x80;
public static final int CHANNEL_OUT_FRONT_LEFT_OF_CENTER = 0x100;
public static final int CHANNEL_OUT_FRONT_RIGHT_OF_CENTER = 0x200;
public static final int CHANNEL_OUT_BACK_CENTER = 0x400;
public static final int CHANNEL_OUT_SIDE_LEFT = 0x800;
public static final int CHANNEL_OUT_SIDE_RIGHT = 0x1000;
/** @hide */
public static final int CHANNEL_OUT_TOP_CENTER = 0x2000;
/** @hide */
public static final int CHANNEL_OUT_TOP_FRONT_LEFT = 0x4000;
/** @hide */
public static final int CHANNEL_OUT_TOP_FRONT_CENTER = 0x8000;
/** @hide */
public static final int CHANNEL_OUT_TOP_FRONT_RIGHT = 0x10000;
/** @hide */
public static final int CHANNEL_OUT_TOP_BACK_LEFT = 0x20000;
/** @hide */
public static final int CHANNEL_OUT_TOP_BACK_CENTER = 0x40000;
/** @hide */
public static final int CHANNEL_OUT_TOP_BACK_RIGHT = 0x80000;

public static final int CHANNEL_OUT_MONO = CHANNEL_OUT_FRONT_LEFT;
public static final int CHANNEL_OUT_STEREO = (CHANNEL_OUT_FRONT_LEFT
                                            | CHANNEL_OUT_FRONT_RIGHT);
// aka QUAD_BACK
public static final int CHANNEL_OUT_QUAD = (CHANNEL_OUT_FRONT_LEFT
                                          | CHANNEL_OUT_FRONT_RIGHT
                                          | CHANNEL_OUT_BACK_LEFT
                                          | CHANNEL_OUT_BACK_RIGHT);
/** @hide */
public static final int CHANNEL_OUT_QUAD_SIDE = (CHANNEL_OUT_FRONT_LEFT
                                               | CHANNEL_OUT_FRONT_RIGHT
                                               | CHANNEL_OUT_SIDE_LEFT
                                               | CHANNEL_OUT_SIDE_RIGHT);
public static final int CHANNEL_OUT_SURROUND = (CHANNEL_OUT_FRONT_LEFT
                                              | CHANNEL_OUT_FRONT_RIGHT
                                              | CHANNEL_OUT_FRONT_CENTER
                                              | CHANNEL_OUT_BACK_CENTER);
// aka 5POINT1_BACK
public static final int CHANNEL_OUT_5POINT1 = (CHANNEL_OUT_FRONT_LEFT
                                             | CHANNEL_OUT_FRONT_RIGHT
                                             | CHANNEL_OUT_FRONT_CENTER
                                             | CHANNEL_OUT_LOW_FREQUENCY
                                             | CHANNEL_OUT_BACK_LEFT
                                             | CHANNEL_OUT_BACK_RIGHT);
/** @hide */
public static final int CHANNEL_OUT_5POINT1_SIDE = (CHANNEL_OUT_FRONT_LEFT
                                                  | CHANNEL_OUT_FRONT_RIGHT
                                                  | CHANNEL_OUT_FRONT_CENTER
                                                  | CHANNEL_OUT_LOW_FREQUENCY
                                                  | CHANNEL_OUT_SIDE_LEFT
                                                  | CHANNEL_OUT_SIDE_RIGHT);
// different from AUDIO_CHANNEL_OUT_7POINT1 used internally, and not accepted by AudioRecord.
/** @deprecated Not the typical 7.1 surround configuration. Use {@link #CHANNEL_OUT_7POINT1_SURROUND} instead. */
@Deprecated    public static final int CHANNEL_OUT_7POINT1 = (CHANNEL_OUT_FRONT_LEFT
                                                    | CHANNEL_OUT_FRONT_RIGHT
                                                    | CHANNEL_OUT_FRONT_CENTER
                                                    | CHANNEL_OUT_LOW_FREQUENCY
                                                    | CHANNEL_OUT_BACK_LEFT
                                                    | CHANNEL_OUT_BACK_RIGHT 
                                                    | CHANNEL_OUT_FRONT_LEFT_OF_CENTER
                                                    | CHANNEL_OUT_FRONT_RIGHT_OF_CENTER);
// matches AUDIO_CHANNEL_OUT_7POINT1
public static final int CHANNEL_OUT_7POINT1_SURROUND = (
        CHANNEL_OUT_FRONT_LEFT | CHANNEL_OUT_FRONT_CENTER | CHANNEL_OUT_FRONT_RIGHT |
        CHANNEL_OUT_SIDE_LEFT | CHANNEL_OUT_SIDE_RIGHT |
        CHANNEL_OUT_BACK_LEFT | CHANNEL_OUT_BACK_RIGHT |
        CHANNEL_OUT_LOW_FREQUENCY);
// CHANNEL_OUT_ALL is not yet defined; if added then it should match AUDIO_CHANNEL_OUT_ALL

Source: 一次搞懂 Android 音频开发, CSDN blog by Android世界的小学生.
