First, the references listed below are a good starting point for building a working understanding of the whole audio-stream pipeline.
References:
https://github.com/muhku/FreeStreamer
https://github.com/mattgallagher/AudioStreamer
http://msching.github.io/blog/2014/07/09/audio-in-ios-3/
http://msching.github.io/blog/2014/08/02/audio-in-ios-5/
Main data structures:
typedef struct AudioStream {
    AudioFileStreamID            mAudioFileStream;
    AudioQueueRef                mQueue;
    AudioQueueBufferRef          mBuffer[kNumbersOfBuffer];
    AudioStreamPacketDescription mPacketDecs[kMaxPacketDescs];
    AudioStreamBasicDescription  mAudioStreamBasicDesc;
    unsigned int                 mFillBufferIndex;   // index of the buffer currently being filled
    UInt32                       mPacketBufferSize;
    size_t                       mBytesFilled;       // bytes written into the current buffer
    size_t                       mPacketsFilled;     // packet descriptions for the current buffer
    bool                         mInused[kNumbersOfBuffer]; // whether each buffer is enqueued on the queue
    NSInteger                    mBuffersUsed;
} AS;
typedef struct StreamFile {
    UInt32    mBitRate;            // average bit rate, in bits per second
    NSInteger mFileLength;         // total file length in bytes
    NSInteger mDataOffset;         // byte offset where the audio data begins
    UInt64    mAudioDataByteCount; // number of bytes of audio data
} SF;
extern NSString * const ASStatusChangedNotification;
@interface FileStreamPlayer : NSObject {
    AS audioStream;                 // parser/queue state, see above
    BOOL discontinuous;             // YES after a seek, until the parser resyncs
    NSThread *internalThread;       // thread running the download/playback loop
    NSURL *url;
    NSDictionary *httpHeader;
    NSString *fileExtension;
    SF streamFile;                  // file-level info: bit rate, offsets, byte count
    double sampleRate;
    double packetDuration;          // seconds per packet: mFramesPerPacket / sampleRate
    CFReadStreamRef stream;
    pthread_mutex_t queueBuffersMutex;         // protects the buffer in-use flags
    pthread_cond_t queueBufferReadyCondition;  // signaled when a buffer is returned
    AudioStreamState state;
    AudioStreamStopReason stopReason;
    NSNotificationCenter *notificationCenter;
}
@end
First, to obtain the audio stream, use the CFNetwork APIs to issue an HTTP request, register a client callback to receive the incoming data, and schedule the stream on the current run loop:
CFHTTPMessageRef message = CFHTTPMessageCreateRequest(NULL,
                                                      (CFStringRef)@"GET",
                                                      (CFURLRef)url,
                                                      kCFHTTPVersion1_1);
stream = CFReadStreamCreateForHTTPRequest(NULL, message);
CFRelease(message);

// Honor the system HTTP proxy settings.
CFDictionaryRef proxySettings = CFNetworkCopySystemProxySettings();
CFReadStreamSetProperty(stream, kCFStreamPropertyHTTPProxy, proxySettings);
CFRelease(proxySettings);

// Register a client callback for stream events, then schedule the
// stream on the current run loop and open it to start the request.
CFStreamClientContext context = {0, self, NULL, NULL, NULL};
CFReadStreamSetClient(stream,
                      kCFStreamEventHasBytesAvailable |
                      kCFStreamEventErrorOccurred |
                      kCFStreamEventEndEncountered,
                      ASReadStreamCallBack,
                      &context);
CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
CFReadStreamOpen(stream);
This requires implementing the callback that fires when network data becomes available, ASReadStreamCallBack:
static void ASReadStreamCallBack(CFReadStreamRef aStream,
CFStreamEventType eventType,
void* inClientInfo) {
FileStreamPlayer* this = (FileStreamPlayer *)inClientInfo;
[this handleReadFromStream:aStream eventType:eventType];
}
The callback receives the current stream, the stream event type, and the client context object.
Once data has been read from the network, hand the raw bytes to an audio file stream: AudioFileStreamOpen creates a parsing session (think of it as the streaming analogue of opening an audio file), and AudioFileStreamParseBytes feeds it data to parse:
AudioFileStreamOpen
AudioFileStreamParseBytes
AudioFileStreamOpen takes two callbacks, ASPropertyListenerProc and ASPacketsProc: the former is invoked as stream properties (the header information) are discovered, the latter each time the parser separates out audio packets. Every subsequent call to AudioFileStreamParseBytes triggers these callbacks again as needed.
static void ASPropertyListenerProc( // called as stream properties (the header) are parsed
void * inClientData,
AudioFileStreamID inAudioFileStream,
AudioFileStreamPropertyID inPropertyID,
UInt32 * ioFlags) {
FileStreamPlayer* this = (FileStreamPlayer *)inClientData;
[this handleFileStreamPropertyChange:inAudioFileStream
fileStreamPropertyID:inPropertyID
ioFlags:ioFlags];
}
static void ASPacketsProc( // called when the parser has separated out audio packets
void * inClientData,
UInt32 inNumberBytes,
UInt32 inNumberPackets,
const void * inInputData,
AudioStreamPacketDescription *inPacketDescriptions) {
FileStreamPlayer* this = (FileStreamPlayer *)inClientData;
[this handleAudioPackets:inInputData
numberBytes:inNumberBytes
numberPackets:inNumberPackets
packetDescriptions:inPacketDescriptions];
}
Next, the parsed stream information is stored in the structures above, and the separated packets are copied into buffers that are enqueued on an AudioQueue for playback. An AudioQueue output callback is registered so that each buffer can be refilled once it finishes playing:
AudioQueueNewOutput(&audioStream.mAudioStreamBasicDesc,
ASAudioQueueOutputCallback,
self, NULL, NULL, 0, &audioStream.mQueue);
static void ASAudioQueueOutputCallback( // called when the queue finishes playing a buffer
void* inClientData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer) {
FileStreamPlayer* this = (FileStreamPlayer*)inClientData;
[this handleCompleteBuffer:inAQ
buffer:inBuffer];
}
With that in place, the AudioQueue can be driven normally: play, pause, stop, and so on. Note that shared state touched from multiple threads (the download/parse thread and the AudioQueue callback thread) must be protected by locks. I mainly followed AudioStreamer here, so I use a pthread mutex/condition pair together with @synchronized(self).