Audio Implementation

This page explains how to implement the audio Hardware Abstraction Layer (HAL) and configure the shared library.

Implementing the HAL

The audio HAL is composed of three different interfaces that you must implement:

  • hardware/libhardware/include/hardware/audio.h - represents the main functions of an audio device.
  • hardware/libhardware/include/hardware/audio_policy.h - represents the audio policy manager, which handles things like audio routing and volume control policies.
  • hardware/libhardware/include/hardware/audio_effect.h - represents effects that can be applied to audio, such as downmixing, echo cancellation, or noise suppression.

For an example, refer to the implementation for the Galaxy Nexus at device/samsung/tuna/audio.
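
To give a sense of the shape of an implementation, the following is a minimal sketch of the module entry point that a legacy audio HAL built against audio.h exposes. The module name and author are placeholders, and a complete implementation (such as the tuna HAL) also fills in every audio_hw_device callback:

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include <hardware/hardware.h>
#include <hardware/audio.h>

static int adev_open(const hw_module_t *module, const char *name,
                     hw_device_t **device);

static struct hw_module_methods_t hal_module_methods = {
    .open = adev_open,
};

/* HAL_MODULE_INFO_SYM is the symbol the framework looks up after it
 * loads audio.primary.<device_name>.so. */
struct audio_module HAL_MODULE_INFO_SYM = {
    .common = {
        .tag = HARDWARE_MODULE_TAG,
        .module_api_version = AUDIO_MODULE_API_VERSION_0_1,
        .hal_api_version = HARDWARE_HAL_API_VERSION,
        .id = AUDIO_HARDWARE_MODULE_ID,
        .name = "Example audio HW HAL",  /* placeholder */
        .author = "Example, Inc.",       /* placeholder */
        .methods = &hal_module_methods,
    },
};

static int adev_open(const hw_module_t *module, const char *name,
                     hw_device_t **device)
{
    struct audio_hw_device *adev;

    if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
        return -EINVAL;

    adev = calloc(1, sizeof(*adev));
    if (!adev)
        return -ENOMEM;

    adev->common.tag = HARDWARE_DEVICE_TAG;
    adev->common.version = AUDIO_DEVICE_API_VERSION_2_0;
    adev->common.module = (struct hw_module_t *)module;
    /* A real HAL assigns open_output_stream, open_input_stream,
     * set_parameters, and the remaining audio_hw_device callbacks here. */

    *device = &adev->common;
    return 0;
}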

In addition to implementing the HAL, you need to create a device/<company_name>/<device_name>/audio/audio_policy.conf file that declares the audio devices present on your product. For an example, see the file for the Galaxy Nexus audio hardware in device/samsung/tuna/audio/audio_policy.conf. Also see the audio header files for a reference to the properties you can define.

In the Android M release and later, the paths are:

  • system/media/audio/include/system/audio.h
  • system/media/audio/include/system/audio_policy.h

In Android 5.1 and earlier, the paths are:

  • system/core/include/system/audio.h
  • system/core/include/system/audio_policy.h
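
As a rough sketch of the file's overall shape, a minimal audio_policy.conf for a device with a speaker and a built-in microphone might look like the following; the device, rate, and mask values are illustrative placeholders, not requirements:

global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADPHONE
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET
      }
    }
  }
}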

Multi-channel support

If your hardware and driver support multichannel audio via HDMI, you can output the audio stream directly to the audio hardware. This bypasses the AudioFlinger mixer, so the stream is not downmixed to two channels.

The audio HAL must expose whether an output stream profile supports multichannel audio capabilities. If the HAL exposes its capabilities, the default policy manager allows multichannel playback over HDMI.

For more implementation details, see device/samsung/tuna/audio/audio_hw.c in the Android 4.1 release.

To specify that your product contains a multichannel audio output, edit the audio_policy.conf file to describe the multichannel output for your product. The following is an example from the Galaxy Nexus that shows a "dynamic" channel mask, which means the audio policy manager queries the actual channel masks supported by the HDMI sink after connection. You can also specify a static channel mask like AUDIO_CHANNEL_OUT_5POINT1.

audio_hw_modules {
  primary {
    outputs {
        ...
        hdmi {
          sampling_rates 44100|48000
          channel_masks dynamic
          formats AUDIO_FORMAT_PCM_16_BIT
          devices AUDIO_DEVICE_OUT_AUX_DIGITAL
          flags AUDIO_OUTPUT_FLAG_DIRECT
        }
        ...
    }
    ...
  }
  ...
}
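
When a profile is declared dynamic, the audio policy manager queries the output stream for its supported channel masks using the AUDIO_PARAMETER_STREAM_SUP_CHANNELS key. The following is a minimal sketch of how a legacy HAL's output stream might answer that query; the hard-coded mask list is a placeholder for values a real implementation would parse from the HDMI sink's EDID:

#include <cutils/str_parms.h>
#include <hardware/audio.h>

static char *out_get_parameters(const struct audio_stream *stream,
                                const char *keys)
{
    struct str_parms *query = str_parms_create_str(keys);
    struct str_parms *reply = str_parms_create();
    char value[32];
    char *str;

    (void)stream;
    if (str_parms_get_str(query, AUDIO_PARAMETER_STREAM_SUP_CHANNELS,
                          value, sizeof(value)) >= 0) {
        /* Placeholder: a real HAL reports the masks it parsed from the
         * connected HDMI sink's EDID. */
        str_parms_add_str(reply, AUDIO_PARAMETER_STREAM_SUP_CHANNELS,
                          "AUDIO_CHANNEL_OUT_STEREO|AUDIO_CHANNEL_OUT_5POINT1");
    }

    str = str_parms_to_str(reply);
    str_parms_destroy(query);
    str_parms_destroy(reply);
    return str;
}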

AudioFlinger's mixer automatically downmixes the content to stereo when it is sent to an audio device that does not support multichannel audio.

Media codecs

Ensure the audio codecs your hardware and drivers support are properly declared for your product. For details on declaring supported codecs, see Exposing Codecs to the Framework.

Configuring the shared library

You need to package the HAL implementation into a shared library and copy it to the appropriate location by creating an Android.mk file:

  1. Create a device/<company_name>/<device_name>/audio directory to contain your library's source files.
  2. Create an Android.mk file to build the shared library. Ensure that the Makefile contains the following line:
    LOCAL_MODULE := audio.primary.<device_name>

    Note that your library must be named audio.primary.<device_name>.so so that Android can correctly load it. The "primary" portion of this filename indicates that this shared library is for the primary audio hardware on the device. The module names audio.a2dp.<device_name> and audio.usb.<device_name> are also available for Bluetooth and USB audio interfaces; the framework locates all of these modules by name at runtime, as shown in the sketch after this list. Here is an example of an Android.mk from the Galaxy Nexus audio hardware:

    LOCAL_PATH := $(call my-dir)

    include $(CLEAR_VARS)

    LOCAL_MODULE := audio.primary.tuna
    LOCAL_MODULE_RELATIVE_PATH := hw
    LOCAL_SRC_FILES := audio_hw.c ril_interface.c
    LOCAL_C_INCLUDES += \
            external/tinyalsa/include \
            $(call include-path-for, audio-utils) \
            $(call include-path-for, audio-effects)
    LOCAL_SHARED_LIBRARIES := liblog libcutils libtinyalsa libaudioutils libdl
    LOCAL_MODULE_TAGS := optional

    include $(BUILD_SHARED_LIBRARY)
  3. If your product supports low latency audio as specified by the Android CDD, copy the corresponding XML feature file into your product. For example, in your product's device/<company_name>/<device_name>/device.mk Makefile:
    PRODUCT_COPY_FILES := ...

    PRODUCT_COPY_FILES += \
            frameworks/native/data/etc/android.hardware.audio.low_latency.xml:system/etc/permissions/android.hardware.audio.low_latency.xml
  4. Copy the audio_policy.conf file that you created earlier to the system/etc/ directory in your product's device/<company_name>/<device_name>/device.mk Makefile. For example:
    PRODUCT_COPY_FILES += \
            device/samsung/tuna/audio/audio_policy.conf:system/etc/audio_policy.conf
  5. Declare the shared modules of your audio HAL that are required by your product in the product's device/<company_name>/<device_name>/device.mk Makefile. For example, the Galaxy Nexus requires the primary and bluetooth audio HAL modules:
    PRODUCT_PACKAGES += \
            audio.primary.tuna \
            audio.a2dp.default
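
As referenced in step 2, the framework locates HAL modules by name at runtime through libhardware. The following is a minimal sketch of that loading path; the helper name open_primary_audio_hal is hypothetical, since AudioFlinger performs the equivalent internally and you do not write this code yourself:

#include <hardware/hardware.h>
#include <hardware/audio.h>

int open_primary_audio_hal(struct audio_hw_device **dev)
{
    const struct hw_module_t *module;
    int rc;

    /* "audio" + "primary" resolves to audio.primary.<device_name>.so
     * under the hw/ library directory. */
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, "primary", &module);
    if (rc != 0)
        return rc;

    /* Invokes the module's open() method, i.e. adev_open() in the HAL. */
    return audio_hw_device_open(module, dev);
}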

Audio pre-processing effects

The Android platform provides audio effects on supported devices in the audiofx package, which developers can access. For example, on the Nexus 10, the following pre-processing effects are supported:

  • Automatic Gain Control (AGC)
  • Acoustic Echo Cancellation (AEC)
  • Noise Suppression (NS)

Pre-processing effects are paired with the use case in which the pre-processing is requested. In Android app development, a use case is referred to as an AudioSource; app developers request the AudioSource abstraction instead of the actual audio hardware device. The Android Audio Policy Manager maps an AudioSource to the actual hardware with AudioPolicyManagerBase::getDeviceForInputSource(int inputSource), as sketched after the list below. The following sources are exposed to developers:

  • android.media.MediaRecorder.AudioSource.CAMCORDER
  • android.media.MediaRecorder.AudioSource.VOICE_COMMUNICATION
  • android.media.MediaRecorder.AudioSource.VOICE_CALL
  • android.media.MediaRecorder.AudioSource.VOICE_DOWNLINK
  • android.media.MediaRecorder.AudioSource.VOICE_UPLINK
  • android.media.MediaRecorder.AudioSource.VOICE_RECOGNITION
  • android.media.MediaRecorder.AudioSource.MIC
  • android.media.MediaRecorder.AudioSource.DEFAULT
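
To illustrate the kind of mapping getDeviceForInputSource performs, here is a simplified, hypothetical C sketch; the real implementation is C++ in AudioPolicyManagerBase and also accounts for which input devices are currently available:

#include <system/audio.h>

/* Hypothetical helper illustrating the mapping concept only. */
static audio_devices_t get_device_for_input_source(audio_source_t source)
{
    switch (source) {
    case AUDIO_SOURCE_CAMCORDER:
        /* Prefer the back mic, which typically faces the camera subject. */
        return AUDIO_DEVICE_IN_BACK_MIC;
    default:
        /* MIC, DEFAULT, VOICE_RECOGNITION, VOICE_COMMUNICATION, etc. */
        return AUDIO_DEVICE_IN_BUILTIN_MIC;
    }
}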

The default pre-processing effects applied for each AudioSource are specified in the /system/etc/audio_effects.conf file. To specify your own default effects for every AudioSource, create a /system/vendor/etc/audio_effects.conf file and specify the pre-processing effects to turn on. For an example, see the implementation for the Nexus 10 in device/samsung/manta/audio_effects.conf. AudioEffect instances acquire and release a session when created and destroyed, enabling the effects (such as the Loudness Enhancer) to persist throughout the duration of the session.

Warning: For the VOICE_RECOGNITION use case, do not enable the noise suppression pre-processing effect. It must not be turned on by default when recording from this audio source, and you must not enable it in your own audio_effects.conf file. Turning on the effect by default causes the device to fail the compatibility requirement, regardless of whether it is enabled by default through the configuration file or by the audio HAL implementation's default behavior.

The following example enables pre-processing for the VoIP AudioSource and Camcorder AudioSource. When AudioSource configurations are declared in this manner, the framework automatically requests the corresponding effects from the audio HAL.

pre_processing {
   voice_communication {
       aec {}
       ns {}
   }
   camcorder {
       agc {}
   }
}
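
For the aec, ns, and agc effect names above to resolve, the same audio_effects.conf file must also declare the library that implements them and each effect's UUID. The following sketch uses the AOSP pre-processing library; the UUIDs shown are taken from the default AOSP audio_effects.conf and should be verified against the copy in your platform release:

libraries {
  pre_processing {
    path /system/lib/soundfx/libaudiopreprocessing.so
  }
}

effects {
  agc {
    library pre_processing
    uuid aa8130e0-66fc-11e0-bad0-0002a5d5c51b
  }
  aec {
    library pre_processing
    uuid bb392ec0-8d4d-11e0-a896-0002a5d5c51b
  }
  ns {
    library pre_processing
    uuid c06c8400-8e06-11e0-9cb6-0002a5d5c51b
  }
}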

Source tuning

For AudioSource tuning, there are no explicit requirements on audio gain or audio processing with the exception of voice recognition (VOICE_RECOGNITION).

The requirements for voice recognition are:

  • "flat" frequency response (+/- 3dB) from 100Hz to 4kHz
  • close-talk config: 90dB SPL reads RMS of 2500 (16bit samples)
  • level tracks linearly from -18dB to +12dB relative to 90dB SPL
  • THD < 1% (90dB SPL in 100 to 4000Hz range)
  • 8kHz sampling rate (anti-aliasing)
  • Effects/pre-processing must be disabled by default
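
To make the level requirements concrete: because level must track linearly, an input 12dB above the 90dB SPL reference should read an RMS of roughly 2500 × 10^(12/20) ≈ 9950, and an input 18dB below it roughly 2500 × 10^(-18/20) ≈ 315 (16-bit samples in both cases).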

Examples of tuning different effects for different sources are:

  • Noise Suppressor
    • Tuned for wind noise suppression for CAMCORDER
    • Tuned for stationary noise suppression for VOICE_COMMUNICATION
  • Automatic Gain Control
    • Tuned for close-talk for VOICE_COMMUNICATION and main phone mic
    • Tuned for far-talk for CAMCORDER
