The metaRTC 5.0 IPC version only ships a video transmission example; there is no audio transmission. After tracing the logic of the video path through the source code, I decided to replicate it for audio. First, in the YangEncoderSession struct in YangIpcEncoder.h, next to the existing YangVideoEncoderBuffer2 *out_videoBuffer; add a YangAudioEncoderBuffer2 *out_audioBuffer;. YangAudioEncoderBuffer2 is really just YangVideoEncoderBuffer2: I copied the contents of YangVideoEncoderBuffer2 and renamed it. Then in YangIpc.h add an audioBuffer member, handled the same way as videoBuffer.
This first step lays the groundwork for the application layer to write into the buffer.
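For concreteness, here is a minimal sketch of what the copied type might look like. The member names (mediaBuffer, size, putEAudio, getEAudioRef) are taken from the calls used later in this post; the exact signatures and the YangMediaBuffer type are assumptions, not metaRTC's real definition:

typedef struct YangAudioEncoderBuffer2 {
    YangMediaBuffer mediaBuffer;                                       // underlying frame queue
    int32_t (*size)(YangMediaBuffer *buf);                             // number of frames queued
    void (*putEAudio)(YangMediaBuffer *buf, YangFrame *frame);         // producer side: app layer writes
    uint8_t* (*getEAudioRef)(YangMediaBuffer *buf, YangFrame *frame);  // consumer side: rtc thread reads
} YangAudioEncoderBuffer2;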
Next, be sure to grep for out_videoBuffer and videoBuffer: whatever they do, make out_audioBuffer and audioBuffer do the same. If you have the patience, it is worth studying what those references actually implement (mainly initialization, plus hooking the buffer up to the underlying rtcsession's send path). For example, copy the pattern in yang_ipc_initRtc below. It also makes clear why the upper layer is "out" while the lower layer is "in": the encoder writes out into the shared buffer, and the rtc session takes it in for sending.
session->publish->session.encoder.session.out_audioBuffer = session->audioBuffer;
session->publish->session.rtc.session.in_audioBuffer = session->audioBuffer;
session->publish->session.encoder.session.out_videoBuffer = session->videoBuffer;
session->publish->session.rtc.session.in_videoBuffer = session->videoBuffer;
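The buffer itself also has to be created before this wiring runs. A minimal sketch, assuming you mirror whatever the stock code does for session->videoBuffer in the same function; yang_create_audioEncoderBuffer2 is a hypothetical name for the constructor of the copied type, not an actual metaRTC API:

// hypothetical constructor call, mirroring the videoBuffer setup
session->audioBuffer = (YangAudioEncoderBuffer2*) calloc(1, sizeof(YangAudioEncoderBuffer2));
yang_create_audioEncoderBuffer2(session->audioBuffer);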
Next, move on to YangIpcRtc.c for some further hackery: find the void* yang_ipc_rtcrecv_start_thread(void *obj) function. This is the main thread that sends data, and the author has essentially prepared the audio data and the send function already; you only need to add the following code inside the loop. Note that only the "send audio stream" block is new; the rest is the existing video path, shown for context:
// send audio stream (the new part)
if (session->in_audioBuffer && session->in_audioBuffer->size(&session->in_audioBuffer->mediaBuffer) > 0) {
    audioFrame.payload = session->in_audioBuffer->getEAudioRef(&session->in_audioBuffer->mediaBuffer, &audioFrame);
    if (audioFrame.frametype == YangFrameTypeAudio) {
        data.setAudioData(data.context, &audioFrame);
        ret = yang_ipc_rtcrecv_publishAudioData(session, &data);
    }
}
// existing code: back off briefly when no video is pending
if (session->in_videoBuffer && session->in_videoBuffer->size(&session->in_videoBuffer->mediaBuffer) == 0) {
    yang_usleep(2000);
    continue;
}
// send video stream (existing code, truncated here; it continues unchanged)
if (session->in_videoBuffer && session->in_videoBuffer->size(&session->in_videoBuffer->mediaBuffer) > 0) {
    videoFrame.payload = session->in_videoBuffer->getEVideoRef(&session->in_videoBuffer->mediaBuffer, &videoFrame);
    if (videoFrame.frametype == YANG_Frametype_I) {
        // ...
There, not so hard, is it? Mainly because all I really know is Ctrl-C + Ctrl-V.
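For reference, these are the YangFrame fields the code above and below relies on. This is a subset inferred from the calls in this post, not metaRTC's actual definition:

typedef struct {
    int32_t  frametype;  // e.g. YangFrameTypeAudio or YANG_Frametype_I
    int64_t  pts;        // presentation timestamp
    int64_t  dts;        // decode timestamp
    uint8_t *payload;    // frame bytes
    int32_t  nb;         // payload length in bytes
} YangFrame;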
The last step is the application layer. First, create a new thread for sending audio frames; then, following the routine below, write a loop that reads audio frames from files and sends them continuously.
uint8_t audiobuffer[2 * 1024];

PVOID sendAudioPackets(PVOID args)
{
    STATUS retStatus = STATUS_SUCCESS;
    pStreamManage pstreammanage = gpStreamManage; // could also be passed in via args
    EncoderSession *session = (EncoderSession*) args;
    UINT32 fileIndex = 0, frameSize;
    CHAR filePath[MAX_PATH_LEN + 1];

    if (pstreammanage == NULL) {
        printf("[metaRTC Master] sendAudioPackets(): operation returned status code: 0x%08x \n", STATUS_NULL_ARG);
        goto CleanUp;
    }

    YangFrame audioFrame;
    memset(&audioFrame, 0, sizeof(YangFrame));
    audioFrame.payload = audiobuffer;

    while (!ATOMIC_LOAD_BOOL(&pstreammanage->appTerminateFlag)) {
        // cycle through the pre-recorded Opus sample frames
        fileIndex = fileIndex % NUMBER_OF_OPUS_FRAME_FILES + 1;
        snprintf(filePath, MAX_PATH_LEN, "../opusSampleFrames/sample-%03d.opus", fileIndex);

        // first call queries the frame size; real code should also check
        // frameSize against sizeof(audiobuffer) before reading
        retStatus = readFrameFromDisk(NULL, &frameSize, filePath);
        if (retStatus != STATUS_SUCCESS) {
            printf("[metaRTC Master] readFrameFromDisk(): operation returned status code: 0x%08x \n", retStatus);
            goto CleanUp;
        }
        audioFrame.nb = frameSize;

        // second call reads the frame payload into the buffer
        retStatus = readFrameFromDisk(audioFrame.payload, &frameSize, filePath);
        if (retStatus != STATUS_SUCCESS) {
            printf("[metaRTC Master] readFrameFromDisk(): operation returned status code: 0x%08x \n", retStatus);
            goto CleanUp;
        }

        audioFrame.frametype = YangFrameTypeAudio;
        audioFrame.dts = audioFrame.pts = GETTIME();

        MUTEX_LOCK(pstreammanage->streamingSessionListReadLock);
        // hand the frame to the audio buffer we added to the encoder session
        session->out_audioBuffer->putEAudio(&session->out_audioBuffer->mediaBuffer, &audioFrame);
        if (debuglog) {
            printf("audio ts-> %lld send %d bytes to rtp:", audioFrame.dts, audioFrame.nb);
            dumphex(audioFrame.payload, audioFrame.nb < 200 ? audioFrame.nb : 200);
            printf("\n");
        }
        MUTEX_UNLOCK(pstreammanage->streamingSessionListReadLock);

        // pace the loop at one Opus frame duration per send
        THREAD_SLEEP(SAMPLE_AUDIO_FRAME_DURATION);
    }

CleanUp:
    return (PVOID) (ULONG_PTR) retStatus;
}
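readFrameFromDisk comes from the AWS KVS samples; it reports the file size when the destination buffer is NULL and reads the bytes otherwise. If you do not want to pull the samples in, here is a minimal plain-C stand-in under those assumptions (0 plays the role of STATUS_SUCCESS; this is a sketch, not the KVS implementation):

#include <stdio.h>
#include <stdint.h>

// Pass frame == NULL to query the size first, then call again with a buffer.
int readFrameFromDisk(uint8_t *frame, uint32_t *size, const char *filePath)
{
    if (size == NULL || filePath == NULL) return -1;
    FILE *fp = fopen(filePath, "rb");
    if (fp == NULL) return -1;
    fseek(fp, 0, SEEK_END);
    long fileSize = ftell(fp);
    if (fileSize < 0) { fclose(fp); return -1; }
    if (frame != NULL) {
        fseek(fp, 0, SEEK_SET);
        if (fread(frame, 1, (size_t) fileSize, fp) != (size_t) fileSize) {
            fclose(fp);
            return -1;
        }
    }
    fclose(fp);
    *size = (uint32_t) fileSize;
    return 0;
}

Starting the sender itself is one line with the KVS PIC thread macro (gSession stands for whatever EncoderSession pointer your app holds):

TID audioSenderTid;
THREAD_CREATE(&audioSenderTid, sendAudioPackets, (PVOID) gSession);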
Don't panic if some of these functions look unfamiliar: I used AWS's PIC library (from the amazon-kinesis-video-streams project), and you can write your own equivalents in the same spirit. Why write a blog post instead of submitting the code directly? I think writing it yourself once gives a deeper understanding, and my code isn't necessarily good anyway, so consider this a starting point. Corrections from the experts are welcome. For the Opus sample files, go download them from the KVS open-source repo. Learning is just continuous Baidu and GitHub; soon enough you too will be good at Ctrl-C + Ctrl-V.