Publishing H.264 + AAC over RTMP

1. Overview

This article describes how to use an open-source RTMP library to build an RTMP client that publishes H.264 and AAC streams to an RTMP server. I tested against two open-source projects: rtmpdump and srslibrtmp (the latter is an RTMP project by Chinese developers that optimizes, improves, and extends rtmpdump).

srslibrtmp supports multiple platforms (Linux/macOS/Windows) as well as ARM/MIPS cross-compilation, but its cross-compilation support is weak: the toolchain reference is inflexible and only the system's default sysroot path works, so if your cross toolchain lives inside some vendor SDK you will have to modify the Makefile. I developed and tested against the SDK of a MIPS-based IP camera; in practice both the SRS server and the RTMP client ran fine, with acceptable resource usage. The one drawback is that the srslibrtmp API handles AAC poorly: an 8000 Hz sample rate is not supported, which I consider a bug (or perhaps my misuse; I did not bother to dig through the code, since the RTMP client I later built directly on the rtmpdump API works fine).

rtmpdump is the official RTMP library, but the project lacks samples and API documentation (its RTMP server architecture is also said to be poor; I only studied the RTMP client side, so the server part is not covered here). The biggest advantage of using it to build an RTMP client is that you keep full control over how the data is packaged (RTMP and FLV are both Adobe formats, and RTMP carries its data in the FLV format, as explained below). For how to call the API I followed these two posts: "rtmp 推送h264 + aac 的数据" and "使用librtmp库发布直播流"; anything still unclear can be checked against the source. Building rtmpdump itself is fairly flexible; see the project's README.

An RTMP server can be set up with nginx (see "手把手教你搭建Nginx-rtmp流媒体服务器+使用ffmpeg推流") or by using the RTMP server in srslibrtmp (which can run on embedded platforms); a minimal nginx config is sketched below.
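For reference, a minimal rtmp block for nginx.conf (assuming nginx was built with the nginx-rtmp-module; the block sits at the top level of the file, next to the http block) might look like this:

rtmp {
    server {
        listen 1935;
        application live {
            live on;
        }
    }
}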

 

2. Implementation

This article does not introduce the RTMP protocol itself (to implement an RTMP client purely through the API you do not need to understand the protocol; it is enough to know that it runs over TCP with default port 1935 — readers who want the details can consult other resources).

When publishing H.264 and AAC over RTMP, the stream content falls into four categories. Immediately after connecting to the RTMP server you must send the H.264 sequence header and the AAC sequence header (their layout follows the FLV format; see "Video File Format Specification Version 10"); only from these two packets does the decoder learn how to decode the H.264 and AAC that follow. Everything after that is H.264 and AAC payload data.
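Concretely, the two sequence-header tag bodies look like the following. This is a minimal sketch based on the FLV spec; the helper names are mine, not librtmp API — they mirror what rtmp_write_avc_sequence_header_tag() and rtmp_write_aac_sequence_header_tag() in the example of section 3 must produce.

#include <stdint.h>
#include <string.h>

/* returns the number of bytes written into 'body' */
static int write_avc_sequence_header(uint8_t *body,
		const uint8_t *sps, int sps_len,
		const uint8_t *pps, int pps_len)
{
	int i = 0;

	/* 5-byte VIDEODATA tag header */
	body[i++] = 0x17;	/* frame type 1 (keyframe) | codec id 7 (AVC) */
	body[i++] = 0x00;	/* AVCPacketType 0: sequence header */
	body[i++] = 0x00;	/* composition time offset, 24 bits */
	body[i++] = 0x00;
	body[i++] = 0x00;

	/* AVCDecoderConfigurationRecord */
	body[i++] = 0x01;	/* configurationVersion */
	body[i++] = sps[1];	/* AVCProfileIndication */
	body[i++] = sps[2];	/* profile_compatibility */
	body[i++] = sps[3];	/* AVCLevelIndication */
	body[i++] = 0xff;	/* lengthSizeMinusOne = 3: 4-byte NALU lengths */
	body[i++] = 0xe1;	/* number of SPS = 1 */
	body[i++] = (sps_len >> 8) & 0xff;
	body[i++] = sps_len & 0xff;
	memcpy(&body[i], sps, sps_len);
	i += sps_len;
	body[i++] = 0x01;	/* number of PPS = 1 */
	body[i++] = (pps_len >> 8) & 0xff;
	body[i++] = pps_len & 0xff;
	memcpy(&body[i], pps, pps_len);
	i += pps_len;

	return i;	/* == sps_len + pps_len + 16, the body size used below */
}

static int write_aac_sequence_header(uint8_t *body,
		int samplerate_index, int channels)
{
	/* 2-byte AUDIODATA tag header: for AAC this is always 0xaf,
	 * i.e. format 10 (AAC), 44 kHz flag, 16-bit, stereo flag */
	body[0] = 0xaf;
	body[1] = 0x00;		/* AACPacketType 0: sequence header */
	/* 2-byte AudioSpecificConfig: 5 bits object type (2 = AAC-LC),
	 * 4 bits sampling frequency index (11 = 8000 Hz), 4 bits channels */
	body[2] = (2 << 3) | ((samplerate_index >> 1) & 0x07);
	body[3] = ((samplerate_index & 1) << 7) | ((channels & 0x0f) << 3);
	return 4;	/* matches the m_nBodySize of 4 used below */
}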

Recall that an FLV file is made up of the FLV file header followed by repeated previous-tag-size / FLV-tag pairs, where a tag is of type video, audio, or script. Over RTMP only the FLV tag is transmitted, and with the FLV tag header stripped off (below we call this an rtmp tag). The rtmp tag is wrapped into an RTMP packet as its body, and RTMP packets are what actually goes over the wire (in the order described above: first the H.264 sequence header tag, then the AAC sequence header tag, then the H.264 and AAC data tags). For the exact encapsulation see "Video File Format Specification Version 10" or my other post "flv封装H264+AAC[附完整代码]" (note that my FLV muxer has some limitations: it only supports an 8 kHz sample rate, 16-bit samples, a 1024-byte PCM buffer, and so on; I have been too lazy to fix them, but they should all be marked "TODO" in the source).
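The per-frame data tags are simply the payload behind a short header. Another sketch under the same assumptions (illustrative helper names; the video input is already AVCC-framed, the AAC input still carries its ADTS header):

#include <stdint.h>
#include <string.h>

static int write_avc_data_tag(uint8_t *body,
		const uint8_t *nalus, int len, int keyframe)
{
	body[0] = keyframe ? 0x17 : 0x27; /* frame type 1/2 | codec id 7 */
	body[1] = 0x01;			/* AVCPacketType 1: NALU(s) */
	body[2] = 0x00;			/* composition time offset */
	body[3] = 0x00;
	body[4] = 0x00;
	memcpy(&body[5], nalus, len);	/* length-prefixed NALUs */
	return len + 5;			/* the "buf_len + 5" body size below */
}

static int write_aac_data_tag(uint8_t *body,
		const uint8_t *adts_frame, int len)
{
	body[0] = 0xaf;
	body[1] = 0x01;			/* AACPacketType 1: raw AAC frame */
	memcpy(&body[2], adts_frame + 7, len - 7); /* drop the ADTS header */
	return len - 7 + 2;	/* the "aac_buf_len - 7 + 2" size below */
}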

A few points worth noting:

(1) I am not sure what the absolute/relative timestamp flag m_hasAbsTimestamp in RTMPPacket actually means; both settings worked for me (I use 0 as the timestamp base);

(2) RTMP_SendPacket can either queue the packet or send it directly; I have not looked into the difference between the two;

(3) when calling RTMP_SendPacket from multiple threads it is best to guard it with a lock; I have not investigated the exact reason.

 

3. Example

The example below shows how to write RTMP client code with the library provided by rtmpdump and publish H.264 and AAC streams to an RTMP server.

For the complete sources see my GitHub: https://github.com/steveliu121/pistreamer/tree/master/rtmp

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <stdint.h>
#include <pthread.h>
#include <errno.h>

#include "myrtmp.h"
#include "aacenc.h"
#include "my_middle_media.h"
#include <my_video_input.h>


/* XXX: WARNING: the PCM period buffer length must evenly divide the
 * AAC encoder's input PCM frame length, otherwise the AAC timestamps
 * will be wrong. Here the PCM period buffer length == 1024 bytes
 * and the AAC input PCM frame length == 2048 bytes.
 */


#define SPS_LEN		28
#define PPS_LEN		6

#define RES_720P
#ifdef RES_720P
#define RESOLUTION_720P MY_VIDEO_RES_720P
#define RES_WIDTH	1280
#define RES_HEIGHT	720
#endif

#define VIDEO_FPS		15
#define VIDEO_TIME_SCALE	90000
#define VIDEO_SAMPLE_DURATION	(VIDEO_TIME_SCALE / VIDEO_FPS)

#define AUDIO_SAMPLERATE	8000
#define AUDIO_CHANNELS		1
#define AUDIO_TIME_SCALE	(AUDIO_SAMPLERATE * AUDIO_CHANNELS)
/* one capture period is 1024 bytes of 16-bit mono PCM = 512 samples;
 * 512 / 8000 Hz = 64 ms per period, i.e. fps = 15.625 */
#define AUDIO_SAMPLE_DURATION	512
#define AAC_BITRATE		16000


static const uint8_t sps_buf[SPS_LEN] = {0x27, 0x64, 0x00, 0x29, 0xac, 0x1a, 0xd0, 0x0a,
			0x00, 0xb7, 0x4d, 0xc0, 0x40, 0x40, 0x50, 0x00,
			0x00, 0x03, 0x00, 0x10, 0x00 ,0x00, 0x03, 0x01,
			0xe8, 0xf1 ,0x42, 0x2a};
static const uint8_t pps_buf[PPS_LEN] = {0x28, 0xee, 0x01, 0x34, 0x92, 0x24};
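/* the SPS/PPS above are specific to this camera's encoder: 0x27 and 0x28
 * are the H.264 NAL headers (nal_ref_idc = 1, types 7 = SPS, 8 = PPS) */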
/*
static const uint8_t sps_buf[SPS_LEN + 4] = {0x00, 0x00, 0x00, 0x1c, 0x27, 0x64,
			0x00, 0x29, 0xac, 0x1a, 0xd0, 0x0a,
			0x00, 0xb7, 0x4d, 0xc0, 0x40, 0x40, 0x50, 0x00,
			0x00, 0x03, 0x00, 0x10, 0x00 ,0x00, 0x03, 0x01,
			0xe8, 0xf1 ,0x42, 0x2a};
static const uint8_t pps_buf[PPS_LEN + 4] = {0x00, 0x00, 0x00, 0x06, 0x28, 0xee,
			0x01, 0x04, 0x92, 0x24};
			*/
static int g_exit;
static HANDLE_AACENCODER aac_enc_hd;
static uint8_t aac_decoder_conf[64];
static int aac_decoder_conf_len;
uint32_t g_timestamp_begin;
RTMP *rtmp;
RTMPPacket video_pkt;
RTMPPacket audio_pkt;
pthread_mutex_t av_mutex;

void sig_handle(int sig)
{
	g_exit = 1;
}

void h264_cb(const struct timeval *tv, const void *data,
	const int len, const int keyframe)
{
	int ret = 0;
	uint8_t *buf = NULL;
	int buf_len = 0;
	uint32_t timestamp = 0;
	int buf_payload_len = 0;

	timestamp = (tv->tv_sec * 1000) + (tv->tv_usec / 1000);

	if (g_timestamp_begin == 0)
		g_timestamp_begin = timestamp;

	/* strip sps/pps from I frame and
	 * replace NALU start flag '0x00/0x00/0x00/0x01' with
	 * the length of NALU in BIGENDIAN
	 */
	if (keyframe) {
		buf = (uint8_t *)data + SPS_LEN + PPS_LEN + 2 * 4;
		buf_len = len - SPS_LEN - PPS_LEN - 2 * 4;
	} else {
		buf = (uint8_t *)data;
		buf_len = len;
	}
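	/* note: the 4-byte length prefix below is written into the capture
	 * buffer in place, casting away the const qualifier on 'data' */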
	buf_payload_len = buf_len - 4;
	buf[0] = buf_payload_len >> 24;
	buf[1] = buf_payload_len >> 16;
	buf[2] = buf_payload_len >> 8;
	buf[3] = buf_payload_len & 0xff;

	video_pkt.m_headerType = RTMP_PACKET_SIZE_LARGE;
	video_pkt.m_nTimeStamp = (timestamp - g_timestamp_begin);
	video_pkt.m_nBodySize = buf_len + 5;//5bytes VIDEODATA tag header
	rtmppacket_alloc(&video_pkt, video_pkt.m_nBodySize);
	rtmp_write_avc_data_tag(video_pkt.m_body, buf, buf_len, keyframe);

	ret = rtmp_isconnected(rtmp);
	if (ret == true) {
		/* true: send to outqueue;false: send directly */
		pthread_mutex_lock(&av_mutex);
		ret = rtmp_sendpacket(rtmp, &video_pkt, true);
		if (ret == false)
			printf("rtmp send video packet fail\n");
		pthread_mutex_unlock(&av_mutex);
	}

	rtmppacket_free(&video_pkt);
}

void audio_cb(const struct timeval *tv, const void *pcm_buf,
	const int pcm_len, const void *spk_buf)
{
	int ret = 0;
	uint8_t *aac_buf = NULL;
	int aac_buf_len = 0;
	uint32_t timestamp = 0;

	timestamp = (tv->tv_sec * 1000) + (tv->tv_usec / 1000);

	if (g_timestamp_begin == 0)
		g_timestamp_begin = timestamp;

	aac_buf_len = aac_encode(aac_enc_hd, pcm_buf, pcm_len, &aac_buf);
	if (aac_buf_len == 0)
		return;

	audio_pkt.m_headerType = RTMP_PACKET_SIZE_LARGE;
	audio_pkt.m_nTimeStamp = (timestamp - g_timestamp_begin);
	audio_pkt.m_nBodySize = aac_buf_len - 7 + 2;//7bytes ADTS header & 2bytes AUDIODATA tag header
	rtmppacket_alloc(&audio_pkt, audio_pkt.m_nBodySize);
	rtmp_write_aac_data_tag(audio_pkt.m_body, aac_buf, aac_buf_len);

	ret = rtmp_isconnected(rtmp);
	if (ret == true) {
		/* true: send to outqueue;false: send directly */
		pthread_mutex_lock(&av_mutex);
		ret = rtmp_sendpacket(rtmp, &audio_pkt, true);
		if (ret == false)
			printf("rtmp send audio packet fail\n");
		pthread_mutex_unlock(&av_mutex);
	}

	rtmppacket_free(&audio_pkt);
}

static int __connect2rtmpsvr(char *url)
{
	int ret = 0;

	rtmp = rtmp_alloc();
	rtmp_init(rtmp);

	rtmp->Link.timeout = 5;	/* default is 30 s */
	ret = rtmp_setupurl(rtmp, url);
	if (ret == false) {
		printf("rtmp setup url fail\n");
		goto exit;
	}

	rtmp_enablewrite(rtmp);

	ret = rtmp_connect(rtmp, NULL);
	if (ret == false) {
		printf("rtmp connect fail\n");
		goto exit;
	}

	ret = rtmp_connectstream(rtmp, 0);
	if (ret == false) {
		printf("rtmp connect stream fail\n");
		rtmp_close(rtmp);
		goto exit;
	}

	return 0;

exit:
	return -1;
}

static void __rtmp_send_sequence_header(void)
{
	int ret = 0;

/* rtmp send audio/video sequence header frame */
	rtmppacket_reset(&video_pkt);
	rtmppacket_reset(&audio_pkt);

	video_pkt.m_packetType = RTMP_PACKET_TYPE_VIDEO;
	video_pkt.m_nChannel = 0x04;
	video_pkt.m_nInfoField2 = rtmp->m_stream_id;
	video_pkt.m_hasAbsTimestamp = false;

	audio_pkt.m_packetType = RTMP_PACKET_TYPE_AUDIO;
	audio_pkt.m_nChannel = 0x04;
	audio_pkt.m_nInfoField2 = rtmp->m_stream_id;
	audio_pkt.m_hasAbsTimestamp = false;

	video_pkt.m_headerType = RTMP_PACKET_SIZE_LARGE;
	video_pkt.m_nTimeStamp = 0;
	video_pkt.m_nBodySize = SPS_LEN + PPS_LEN + 16;
	rtmppacket_alloc(&video_pkt, video_pkt.m_nBodySize);
	rtmp_write_avc_sequence_header_tag(video_pkt.m_body,
						sps_buf, SPS_LEN,
						pps_buf, PPS_LEN);

	ret = rtmp_isconnected(rtmp);
	if (ret == true) {
		/* true: send to outqueue;false: send directly */
		pthread_mutex_lock(&av_mutex);
		ret = rtmp_sendpacket(rtmp, &video_pkt, true);
		if (ret == false)
			printf("rtmp send video packet fail\n");
		pthread_mutex_unlock(&av_mutex);
	}

	rtmppacket_free(&video_pkt);


	audio_pkt.m_headerType = RTMP_PACKET_SIZE_LARGE;
	audio_pkt.m_nTimeStamp = 0;
	audio_pkt.m_nBodySize = 4;
	rtmppacket_alloc(&audio_pkt, audio_pkt.m_nBodySize);
	rtmp_write_aac_sequence_header_tag(audio_pkt.m_body,
					AUDIO_SAMPLERATE, AUDIO_CHANNELS);

	ret = rtmp_isconnected(rtmp);
	if (ret == true) {
		/* true: send to outqueue;false: send directly */
		pthread_mutex_lock(&av_mutex);
		ret = rtmp_sendpacket(rtmp, &audio_pkt, true);
		if (ret == false)
			printf("rtmp send audio packet fail\n");
		pthread_mutex_unlock(&av_mutex);
	}

	rtmppacket_free(&audio_pkt);
}

int main(int argc, char *argv[])
{
	int ret = 0;

	MYVideoInputChannel chn = {
		.channelId = 0,
		.res = RESOLUTION_720P,
		.fps = VIDEO_FPS,
		.bitrate = 1024,
		.gop = 1,
		.vbr = MY_BITRATE_MODE_CBR,
		.cb = h264_cb
	};

	MYVideoInputOSD osd_info = {
		.pic_enable = 0,
		.pic_path = "/usr/osd_char_lib/argb_2222",
		.pic_x = 200,
		.pic_y = 200,
		.time_enable = 1,
		.time_x = 100,
		.time_y  = 100
	};

	MYAudioInputAttr_aec audio_in = {
		.sampleRate = AUDIO_SAMPLERATE,
		.sampleBit = 16,
		.volume = 95,
		.cb = audio_cb
	};


	signal(SIGTERM, sig_handle);
	signal(SIGINT, sig_handle);

	pthread_mutex_init(&av_mutex, NULL);

	rtmp_logsetlevel(RTMP_LOGINFO);

	if (argc <= 1) {
		printf("Usage: %s <rtmp_url>\n"
		"	rtmp_url	 RTMP stream url to publish\n"
		"For example:\n"
		"	%s rtmp://127.0.0.1:1935/live/livestream\n",
		argv[0], argv[0]);
		exit(-1);
	}


	ret = __connect2rtmpsvr(argv[1]);
	if (ret < 0)
		goto exit;

/* create aacencoder */
	ret = create_aac_encoder(&aac_enc_hd,
				AUDIO_CHANNELS, AUDIO_SAMPLERATE, AAC_BITRATE,
				aac_decoder_conf, &aac_decoder_conf_len);
	if (ret < 0)
		goto exit;

	__rtmp_send_sequence_header();

/* start audio&video device and receive buffers, do muxer in callback */
	MYAV_Context_Init();

	ret = MYVideoInput_Init();
	if (ret)
		goto out;

	ret = MYVideoInput_AddChannel(chn);
	if (ret)
		goto out;

	ret = MYVideoInput_SetOSD(chn.channelId, &osd_info);
	if (ret)
		goto out;

	ret = MYAudioInputOpen(&audio_in);
	if (ret)
		goto out;

	ret = MYVideoInput_Start();
	if (ret)
		goto out;

	ret = MYAudioInputStart();
	if (ret)
		goto out;

	while (!g_exit)
		sleep(1);

out:
	MYVideoInput_Uninit();
	MYAudioInputStop();
	MYAudioInputClose();

	MYAV_Context_Release();

exit:
	pthread_mutex_destroy(&av_mutex);
	rtmp_close(rtmp);
	rtmp_free(rtmp);
	rtmp = NULL;

	return ret;

}
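To verify the stream end to end, assuming an RTMP server (nginx-rtmp or SRS) is listening on the default port 1935, the published stream can be played back with ffplay:

ffplay rtmp://127.0.0.1:1935/live/livestream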

 
