Recording Audio with AudioRecord

Version 1.5 of the Android SDK introduced a bunch of cool new features for developers. Though not as glamorous as some APIs, the new audio manipulation classes - AudioTrack and AudioRecord - offer powerful functionality for manipulating raw audio.

These classes let you record audio directly from the audio input hardware of the device, and stream PCM audio buffers to the audio hardware for playback. Strong sauce indeed for those of you looking to have more control over audio input and playback.

Enough talk - on to the code. To test out these new APIs I put together a simple Android app that listens to 10 seconds of input from the microphone and then plays it back through the speaker in reverse. Perfect for decoding secret messages in old Beatles albums.  

Start with the recording code. It's designed to record the incoming audio to a file on the SD card that we'll read back and play later. Recording audio requires the RECORD_AUDIO uses-permission in your application manifest.

<uses-permission android:name="android.permission.RECORD_AUDIO"/>

The recording code below captures 16-bit mono audio at 11025Hz and writes it to reverseme.pcm on the SD card.

public void record() {
  int frequency = 11025;
  int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
  int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
  File file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/reverseme.pcm");

  // Delete any previous recording.
  if (file.exists())
    file.delete();

  // Create the new file.
  try {
    file.createNewFile();
  } catch (IOException e) {
    throw new IllegalStateException("Failed to create " + file.toString());
  }

  try {
    // Create a DataOutputStream to write the audio data into the saved file.
    OutputStream os = new FileOutputStream(file);
    BufferedOutputStream bos = new BufferedOutputStream(os);
    DataOutputStream dos = new DataOutputStream(bos);

    // Create a new AudioRecord object to record the audio.
    int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
    AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                              frequency, channelConfiguration,
                                              audioEncoding, bufferSize);

    short[] buffer = new short[bufferSize];
    audioRecord.startRecording();

    while (isRecording) {
      int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
      for (int i = 0; i < bufferReadResult; i++)
        dos.writeShort(buffer[i]);
    }

    audioRecord.stop();
    dos.close();
  } catch (Throwable t) {
    Log.e("AudioRecord", "Recording Failed", t);
  }
}
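As a sanity check on the recording settings, the size of the raw PCM output is easy to predict: sample rate × seconds × channels × bytes per sample. A quick calculation for the settings above (plain Java, no Android dependencies; the class and method names are just for illustration):

```java
public class PcmSize {
    // Bytes of raw PCM produced: sampleRate * seconds * channels * bytesPerSample.
    static long pcmBytes(int sampleRate, int seconds, int channels, int bytesPerSample) {
        return (long) sampleRate * seconds * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // 10 seconds of 16-bit mono at 11025Hz, as recorded above.
        System.out.println(pcmBytes(11025, 10, 1, 2)); // 220500 bytes, roughly 215 KB
    }
}
```

So the 10-second clip in this example is a very manageable file, which is why writing it straight to the SD card works fine here.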

Next we create a playback method that reads the file back and plays its contents in reverse. It's important to set the audio data encoding (here PCM 16-bit), channel configuration, and frequency to the same values used in the AudioRecord object. Then again, playing with the playback frequency might have its own appeal.


public void play() {
  // Get the file we want to playback.
  File file = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/reverseme.pcm");
  // Get the length of the audio stored in the file (16 bit so 2 bytes per short)
  // and create a short array to store the recorded audio.
  int musicLength = (int)(file.length()/2);
  short[] music = new short[musicLength];

  try {
    // Create a DataInputStream to read the audio data back from the saved file.
    InputStream is = new FileInputStream(file);
    BufferedInputStream bis = new BufferedInputStream(is);
    DataInputStream dis = new DataInputStream(bis);

    // Read the file into the music array in reverse order.
    int i = 0;
    while (dis.available() > 0) {
      music[musicLength-1-i] = dis.readShort();
      i++;
    }

    // Close the input streams.
    dis.close();

    // Create a new AudioTrack object using the same parameters as the AudioRecord
    // object used to create the file. Note that the buffer size is specified in
    // bytes, so it's twice the number of 16-bit samples.
    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                           11025,
                                           AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                           AudioFormat.ENCODING_PCM_16BIT,
                                           musicLength*2,
                                           AudioTrack.MODE_STREAM);
    // Start playback
    audioTrack.play();

    // Write the music buffer to the AudioTrack object
    audioTrack.write(music, 0, musicLength);
  } catch (Throwable t) {
    Log.e("AudioTrack", "Playback Failed", t);
  }
}
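The reversal trick in play() - writing each short read from the file into the array back-to-front - is easy to exercise in isolation (plain Java, no Android dependencies; the class and method names are just for illustration):

```java
import java.util.Arrays;

public class Reverse {
    // Mirror of the read loop in play(): element i of the input lands at
    // position length-1-i of the output, reversing the sample order.
    static short[] reversed(short[] samples) {
        short[] out = new short[samples.length];
        for (int i = 0; i < samples.length; i++) {
            out[samples.length - 1 - i] = samples[i];
        }
        return out;
    }

    public static void main(String[] args) {
        short[] in = {1, 2, 3, 4};
        System.out.println(Arrays.toString(reversed(in))); // [4, 3, 2, 1]
    }
}
```

Because each 16-bit sample is reversed as a unit (not byte-by-byte), the waveform plays backwards without corrupting the samples themselves.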

Finally, to drive this you need to update your application Activity to call the record and playback methods as appropriate. To keep this example as simple as possible I'm going to record for 10 seconds as soon as the application starts, then play it back in reverse as soon as the sample is complete.

To be more useful you'd almost certainly want to perform the playback operation in a Service and on a background thread. 

@Override
public void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.main);

  // Record on a background thread; isRecording should be a volatile field.
  isRecording = true;
  Thread thread = new Thread(new Runnable() {
    public void run() {
      record();
    }
  });
  thread.start();

  // Let the recording run for 10 seconds. Note that calling wait() here
  // without holding the monitor would throw an IllegalMonitorStateException;
  // sleep() is what we want.
  try {
    Thread.sleep(10000);
  } catch (InterruptedException e) {}

  isRecording = false;

  try {
    thread.join();
  } catch (InterruptedException e) {}

  play();
  finish();
}

The AudioTrack and AudioRecord classes offer a lot more functionality than I've demonstrated here. Using the AudioTrack streaming mode you can process incoming audio and play it back in near real time, letting you manipulate incoming or outgoing audio and perform signal processing on raw audio directly on the device.
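In such a record-process-play loop, the processing step is just a transform applied to each short[] buffer between AudioRecord.read() and AudioTrack.write(). As a minimal sketch of one such transform (plain Java; the gain effect and clamping are illustrative, not from the original post):

```java
public class Gain {
    // Apply a gain factor to 16-bit PCM samples, clamping to the short range
    // to avoid wrap-around distortion on overflow.
    static short[] applyGain(short[] buffer, float gain) {
        short[] out = new short[buffer.length];
        for (int i = 0; i < buffer.length; i++) {
            int scaled = Math.round(buffer[i] * gain);
            if (scaled > Short.MAX_VALUE) scaled = Short.MAX_VALUE;
            if (scaled < Short.MIN_VALUE) scaled = Short.MIN_VALUE;
            out[i] = (short) scaled;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] quiet = {100, -100, 30000};
        short[] loud = applyGain(quiet, 2.0f);
        System.out.println(loud[0] + " " + loud[1] + " " + loud[2]); // 200 -200 32767
    }
}
```

The same shape works for any per-buffer effect - filtering, mixing, level metering - as long as it keeps up with the buffer rate.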

Tell us how you've used these APIs in your Android apps in the comments!

Reference link: http://www.cnblogs.com/Amandaliu/archive/2013/02/04/2891604.html (modified from the linked content, changing the AMR encoding format to AAC).

Android provides two APIs for implementing recording: android.media.AudioRecord and android.media.MediaRecorder. There is plenty of discussion of these two classes online; roughly summarized:

1. AudioRecord is mainly used for record-while-play scenarios (AudioRecord + AudioTrack) and real-time audio processing (as in Talking Tom or voice apps). Pros: real-time processing of the audio, and any audio packaging can be implemented in code. Cons: the output is raw PCM data; saved directly to a file it cannot be played by media players, so you must write code to encode and compress it first. Example: record with AudioRecord and wrap the result as WAV; a 20-second recording produces a file of roughly 3.5 MB (test code written).

2. MediaRecorder already integrates recording, encoding, and compression, and supports a small number of recording formats, roughly .aac (API >= 16), .amr, and .3gp. Pros: most of the work is already integrated, so you just call the relevant APIs and very little code is needed. Cons: no real-time audio processing, and few output formats - for example, no MP3 output. Example: record with MediaRecorder to an AMR file; a 20-second recording produces a file of roughly 33 KB (test code written).

3. Audio format comparison. WAV: high recording quality but a low compression ratio, so files are large. AAC: better quality and smaller files than MP3; lossy compression; generally playable on Apple devices and on Android SDK 4.1.2 (API 16) and later. AMR: high compression ratio but relatively poor quality; mostly used for voice and call recording. As for the common MP3 format, MediaRecorder offers no such output; a common approach is to record with AudioRecord, encode to WAV, and then convert to MP3.

Finally, a test project. Functionality: 1) tap "Record WAV file" to start recording; when recording completes, the file /sdcard/FinalAudio.wav is generated; 2) tap "Record AMR file" to start recording; when recording completes, the file /sdcard/FinalAudio.amr is generated; 3) tap "Stop recording" to stop recording and display the output file and its size.
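The WAV wrapping mentioned above is just a 44-byte RIFF header prepended to the raw PCM data. A minimal header builder (plain Java; the field layout follows the standard RIFF/WAVE format for 16-bit PCM, all multi-byte values little-endian, and the class name is just for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    // Build the standard 44-byte RIFF/WAVE header for 16-bit PCM data.
    static byte[] header(int pcmBytes, int sampleRate, int channels) {
        int byteRate = sampleRate * channels * 2;    // bytes of audio per second
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes());
        b.putInt(36 + pcmBytes);                     // chunk size: header remainder + data
        b.put("WAVE".getBytes());
        b.put("fmt ".getBytes());
        b.putInt(16);                                // fmt chunk size for PCM
        b.putShort((short) 1);                       // audio format 1 = PCM
        b.putShort((short) channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort((short) (channels * 2));          // block align: bytes per frame
        b.putShort((short) 16);                      // bits per sample
        b.put("data".getBytes());
        b.putInt(pcmBytes);                          // data chunk size
        return b.array();
    }
}
```

Writing this header followed by the raw PCM bytes yields a file that a standard media player can open, which is the cheap way to make AudioRecord output playable without a real encoder.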
