iOS: Capturing Microphone Audio and Encoding It to AAC

On iOS, opening the microphone and capturing its input is most commonly done with the AVCaptureSession component, which can capture not only audio but video as well. This article focuses on capturing microphone audio and encoding it.

There is plenty of sample code online for opening the microphone and capturing audio; what follows is a cleaned-up version of it.

First, we need a variable of type AVCaptureSession. It is the bridge between the microphone device and the data output, and through it we can conveniently obtain the microphone's raw data in real time:

AVCaptureSession *m_capture;

We also declare a set of methods for opening and closing the microphone. To actually receive the captured data, the class must additionally adopt the AVCaptureAudioDataOutputSampleBufferDelegate protocol (a minimal interface sketch follows the declarations below):

-(void)open;
-(void)close;
-(BOOL)isOpen;
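
For context, this is roughly what the surrounding interface might look like. It is only a sketch: the class name AudioCapture is made up for illustration, and the SUPPORT_AAC_ENCODER macro and m_converter ivar come from the encoding code later in the article.

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AudioCapture : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
{
    AVCaptureSession *m_capture;    // bridge between the mic device and the data output
#if SUPPORT_AAC_ENCODER
    AudioConverterRef m_converter;  // PCM-to-AAC converter, created lazily (see below)
#endif
}
-(void)open;
-(void)close;
-(BOOL)isOpen;
@end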
Below we implement each of these methods to complete the capture setup. (CKPrint in the code is the author's logging macro; substitute NSLog if you prefer.)

-(void)open {
    NSError *error;
    m_capture = [[AVCaptureSession alloc]init];
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    if (audioDev == nil)
    {
        CKPrint("Couldn't create audio capture device");
        return ;
    }
    
    // create the input for the microphone device
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
    if (error != nil)
    {
        CKPrint("Couldn't create audio input");
        return ;
    }
    
    
    // add the mic input to the capture session
    if ([m_capture canAddInput:audioIn] == NO)
    {
        CKPrint("Couldn't add audio input")
        return ;
    }
    [m_capture addInput:audioIn];
    // set up the output that delivers the audio data
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([m_capture canAddOutput:audioOutput] == NO)
    {
        CKPrint("Couldn't add audio output");
        return ;
    }
    [m_capture addOutput:audioOutput];
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];
    [m_capture startRunning];
    return ;
}
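
One practical note not covered by the original code: on current iOS versions the session will start but deliver no audio unless the app has microphone permission, which means adding an NSMicrophoneUsageDescription entry to Info.plist and requesting access at runtime. A minimal sketch of gating -open on that permission (assuming it is called from within the same class) might look like this:

// Ask for microphone access before starting the capture session.
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
                         completionHandler:^(BOOL granted) {
    if (granted)
    {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self open]; // permission granted, safe to start capturing
        });
    }
    else
    {
        NSLog(@"Microphone permission denied");
    }
}];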

-(void)close {
    if (m_capture != nil && [m_capture isRunning])
    {
        [m_capture stopRunning];
    }
    
    return;
}
-(BOOL)isOpen {
    if (m_capture == nil)
    {
        return NO;
    }
    
    return [m_capture isRunning];
}
With these three methods, all the preparation for microphone capture is done; now we just wait for the data to be delivered to us. For that to happen, there is still one delegate method to implement:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    char szBuf[4096];
    int  nSize = sizeof(szBuf);
    
#if SUPPORT_AAC_ENCODER
    if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:0];
    }
#else //#if SUPPORT_AAC_ENCODER
    AudioStreamBasicDescription outputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer)));
    nSize = (int)CMSampleBufferGetTotalSampleSize(sampleBuffer);
    CMBlockBufferRef databuf = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (nSize <= (int)sizeof(szBuf) && CMBlockBufferCopyDataBytes(databuf, 0, nSize, szBuf) == kCMBlockBufferNoErr)
    {
        [g_pViewController sendAudioData:szBuf len:nSize channel:outputFormat.mChannelsPerFrame];
    }
#endif
}
At this point the capture side is essentially done; what comes out of the callback is raw PCM data. (sendAudioData:len:channel: and g_pViewController in the snippet above are the author's own hooks for passing the data on, for example to a network layer.)

Raw PCM is fairly large, though, and not well suited to network transmission, so if the data has to travel over the network it needs to be encoded first. iOS supports several audio codecs; here we take AAC as the example and implement a function that encodes PCM into AAC.
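
To put rough numbers on that (assuming the common 44.1 kHz, 16-bit, stereo capture format): raw PCM runs at 44,100 × 16 × 2 ≈ 1.41 Mbit/s, while typical AAC streams sit around 64–128 kbit/s, a reduction of roughly 10–20×.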


PCM-to-AAC examples for iOS are also easy to find online, but most of them are incomplete, and quite a few are English-only, which some readers find off-putting. So, playing the good Samaritan, I have tidied them up into a single function that is easy to use.

Before encoding, we first need to create a converter object:

AudioConverterRef m_converter;

#if SUPPORT_AAC_ENCODER
-(BOOL)createAudioConvert:(CMSampleBufferRef)sampleBuffer { // initialize a converter based on the format of the input samples
    if (m_converter != nil)
    {
        return TRUE;
    }
    
    AudioStreamBasicDescription inputFormat = *(CMAudioFormatDescriptionGetStreamBasicDescription(CMSampleBufferGetFormatDescription(sampleBuffer))); // input (PCM) audio format
    AudioStreamBasicDescription outputFormat; // output (AAC) audio format starts here
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mSampleRate       = inputFormat.mSampleRate; // keep the input sample rate
    outputFormat.mFormatID         = kAudioFormatMPEG4AAC;    // AAC encoding
    outputFormat.mChannelsPerFrame = 2;
    outputFormat.mFramesPerPacket  = 1024;                    // one AAC packet holds 1024 frames (not bytes)
    
    AudioClassDescription *desc = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
    if (AudioConverterNewSpecific(&inputFormat, &outputFormat, 1, desc, &m_converter) != noErr)
    {
        CKPrint(@"AudioConverterNewSpecific failed");
        return NO;
    }
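
    // Optional and not part of the original code: request a specific AAC output
    // bitrate via the Audio Toolbox property kAudioConverterEncodeBitRate
    // (64 kbit/s below is only an illustrative value).
    UInt32 outputBitrate = 64000;
    if (AudioConverterSetProperty(m_converter, kAudioConverterEncodeBitRate, sizeof(outputBitrate), &outputBitrate) != noErr)
    {
        CKPrint(@"Setting kAudioConverterEncodeBitRate failed");
    }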

    return YES;
}
-(BOOL)encoderAAC:(CMSampleBufferRef)sampleBuffer aacData:(char*)aacData aacLen:(int*)aacLen { // encode PCM into AAC
    if ([self createAudioConvert:sampleBuffer] != YES)
    {
        return NO;
    }
    
    CMBlockBufferRef blockBuffer = nil;
    AudioBufferList  inBufferList;
    if (CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inBufferList, sizeof(inBufferList), NULL, NULL, 0, &blockBuffer) != noErr)
    {
        CKPrint(@"CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer failed");
        return NO;
    }
    // set up the output buffer list
    AudioBufferList outBufferList;
    outBufferList.mNumberBuffers              = 1;
    outBufferList.mBuffers[0].mNumberChannels = 2;
    outBufferList.mBuffers[0].mDataByteSize   = *aacLen; // capacity of the output buffer
    outBufferList.mBuffers[0].mData           = aacData; // destination buffer for the AAC data
    UInt32 outputDataPacketSize               = 1;
    if (AudioConverterFillComplexBuffer(m_converter, inputDataProc, &inBufferList, &outputDataPacketSize, &outBufferList, NULL) != noErr)
    {
        CKPrint(@"AudioConverterFillComplexBuffer failed");
        return NO;
    }
    
    *aacLen = outBufferList.mBuffers[0].mDataByteSize; // actual size of the encoded AAC data
    CFRelease(blockBuffer);
    return YES;
}
-(AudioClassDescription*)getAudioClassDescriptionWithType:(UInt32)type fromManufacturer:(UInt32)manufacturer { // look up a matching encoder description
    static AudioClassDescription audioDesc;
    
    UInt32 encoderSpecifier = type, size = 0;
    OSStatus status;
    
    memset(&audioDesc, 0, sizeof(audioDesc));
    status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size);
    if (status)
    {
        return nil;
    }
    
    uint32_t count = size / sizeof(AudioClassDescription);
    AudioClassDescription descs[count];
    status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(encoderSpecifier), &encoderSpecifier, &size, descs);
    for (uint32_t i = 0; i < count; i++)
    {
        if ((type == descs[i].mSubType) && (manufacturer == descs[i].mManufacturer))
        {
            memcpy(&audioDesc, &descs[i], sizeof(audioDesc));
            break;
        }
    }
    return &audioDesc;
}
OSStatus inputDataProc(AudioConverterRef inConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData) { // AudioConverterFillComplexBuffer calls this during encoding to fetch the input data, i.e. the raw PCM
    AudioBufferList bufferList = *(AudioBufferList*)inUserData;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData           = bufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize   = bufferList.mBuffers[0].mDataByteSize;
    return noErr;
}
#endif
And that's it: a single function takes care of everything. When you need AAC encoding, just call encoderAAC (the complete code is above):

 char szBuf[4096];
 int  nSize = sizeof(szBuf);
 if ([self encoderAAC:sampleBuffer aacData:szBuf aacLen:&nSize] == YES)
 {
     // do something 
 }
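
One thing worth knowing about the output: the AudioConverter produces raw AAC frames with no ADTS headers, so the bytes returned by encoderAAC work for a custom network protocol but will not play as a standalone .aac file. If you need ADTS framing, a sketch like the following (not part of the original code; the frequency index 4 assumes 44.1 kHz and should be adjusted to the converter's actual output format) builds the 7-byte header to prepend to each encoded frame:

// Build the 7-byte ADTS header for one raw AAC frame of `packetLength` bytes.
// profile = AAC LC, freqIdx = 4 (44.1 kHz), chanCfg = 2 (stereo) -- adjust these
// to match the converter's actual output format.
- (NSData *)adtsDataForPacketLength:(NSUInteger)packetLength {
    const int adtsLength = 7;
    char *packet = malloc(adtsLength);
    int profile = 2;   // AAC LC
    int freqIdx = 4;   // 44.1 kHz
    int chanCfg = 2;   // stereo
    NSUInteger fullLength = adtsLength + packetLength; // header + payload
    packet[0] = (char)0xFF;                            // syncword (high bits)
    packet[1] = (char)0xF9;                            // syncword (low bits), MPEG-2, no CRC
    packet[2] = (char)(((profile - 1) << 6) + (freqIdx << 2) + (chanCfg >> 2));
    packet[3] = (char)(((chanCfg & 3) << 6) + (fullLength >> 11));
    packet[4] = (char)((fullLength & 0x7FF) >> 3);
    packet[5] = (char)(((fullLength & 7) << 5) + 0x1F);
    packet[6] = (char)0xFC;
    return [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
}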
