FFmpeg 0.11.1 Analysis: Timestamp Issues in ffmpeg Stream Transcoding

[This article deals with the fine details of ffmpeg, so it will read somewhat verbose.]

[Different transcoding setups will take different code paths.]


First, some basics:

Timestamps: DTS (decoding time stamp), PTS (presentation time stamp), CTS (composition time stamp).

Timestamps inside ffmpeg are expressed in microseconds (AV_TIME_BASE units); each stream also carries a time_base variable that serves as the granularity for its dts/pts values, so the raw numbers can get very large.

The function av_rescale_q() is used all over the place. AV_ROUND_NEAR_INF means round to nearest, with halfway cases away from zero. av_rescale_rnd() computes a*b/c on 64-bit (8-byte) arguments; to avoid overflow, it compares against INT_MAX internally and splits the computation when needed.
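
As a quick illustration (a standalone sketch with made-up values, not ffmpeg code), converting a 90 kHz MPEG-TS timestamp into AV_TIME_BASE units:

    #include <stdio.h>
    #include <inttypes.h>
    #include <libavutil/avutil.h>
    #include <libavutil/mathematics.h>

    int main(void)
    {
        /* Hypothetical input: a 10-second timestamp in 90 kHz ticks. */
        AVRational tb  = { 1, 90000 };
        int64_t    dts = 900000;

        /* av_rescale_q(a, bq, cq) computes a * bq / cq without overflow,
         * rounding with AV_ROUND_NEAR_INF. */
        int64_t us = av_rescale_q(dts, tb, AV_TIME_BASE_Q);
        printf("%"PRId64" ticks @ 1/90000 -> %"PRId64" us\n", dts, us); /* 10000000 */
        return 0;
    }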


Let's start with packet parsing on the input side, i.e. the av_read_frame() function:

const int genpts = s->flags & AVFMT_FLAG_GENPTS;
The meaning of each flag bit is documented in avformat.h.

//ffmpeg.c, opt_input_file()
ic->flags |= AVFMT_FLAG_NONBLOCK;
Enter the read_frame_internal() function. [An annotated walkthrough of this function: http://www.chinavideo.org/viewthread.php?action=printable&tid=13846]
while (!got_packet && !s->parse_queue) {...} // returns once got_packet is set

In the ff_read_packet() function,

ret = s->iformat->read_packet(s, pkt);

demuxes one packet, which is returned at

    if (!pktl && st->request_probe <= 0)

AVStream's need_parsing is not in effect here, so:

    compute_pkt_fields(s, st, NULL, pkt);
    got_packet = 1;

At this point, we output pkt as-is.
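
Putting the input side together, a minimal read-loop sketch (0.11-era API; "consume" is a hypothetical callback standing in for the per-packet work described below):

    #include <libavformat/avformat.h>

    /* Drain all packets from an already-opened demuxer context. */
    static void read_all(AVFormatContext *ic, void (*consume)(AVPacket *))
    {
        AVPacket pkt;
        while (av_read_frame(ic, &pkt) >= 0) {
            /* pkt.dts/pkt.pts are in ic->streams[pkt.stream_index]->time_base. */
            consume(&pkt);
            av_free_packet(&pkt); /* 0.11-era API; newer trees use av_packet_unref() */
        }
    }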


Now let's see what transcode() does between the parser and decoding:

if (pkt.dts != AV_NOPTS_VALUE && ist->next_dts != AV_NOPTS_VALUE && !copy_ts) {
    int64_t pkt_dts = av_rescale_q(pkt.dts, ist->st->time_base, AV_TIME_BASE_Q);
    int64_t delta   = pkt_dts - ist->next_dts;

    if (is->iformat->flags & AVFMT_TS_DISCONT) {
        if (delta < -1LL*dts_delta_threshold*AV_TIME_BASE ||
            (delta > 1LL*dts_delta_threshold*AV_TIME_BASE &&
             ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE) ||
            pkt_dts + 1 < ist->pts) {
            input_files[ist->file_index]->ts_offset -= delta;
            pkt.dts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts -= av_rescale_q(delta, AV_TIME_BASE_Q, ist->st->time_base);
        }
    } else {
        if (delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
            (delta > 1LL*dts_error_threshold*AV_TIME_BASE &&
             ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE) ||
            pkt_dts + 1 < ist->pts) {
            pkt.dts = AV_NOPTS_VALUE;
        }
        if (pkt.pts != AV_NOPTS_VALUE) {
            int64_t pkt_pts = av_rescale_q(pkt.pts, ist->st->time_base, AV_TIME_BASE_Q);
            delta = pkt_pts - ist->next_dts;
            if (delta < -1LL*dts_error_threshold*AV_TIME_BASE ||
                (delta > 1LL*dts_error_threshold*AV_TIME_BASE &&
                 ist->st->codec->codec_type != AVMEDIA_TYPE_SUBTITLE) ||
                pkt_pts + 1 < ist->pts) {
                pkt.pts = AV_NOPTS_VALUE; // fallenink: if pts falls behind, invalidate it here;
                                          // output_packet() will then substitute ist->dts
            }
        }
    }
}
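
For intuition, a worked example (made-up numbers, not ffmpeg code) of the AVFMT_TS_DISCONT branch, using ffmpeg.c's default dts_delta_threshold of 10 seconds:

    #include <stdio.h>
    #include <libavutil/avutil.h>

    int main(void)
    {
        /* Hypothetical MPEG-TS wrap: next_dts predicted 20 s, packet says 2 s. */
        int64_t next_dts = 20LL * AV_TIME_BASE;
        int64_t pkt_dts  =  2LL * AV_TIME_BASE;
        int64_t delta    = pkt_dts - next_dts;   /* -18000000 us */
        double  dts_delta_threshold = 10;        /* ffmpeg.c default */

        if (delta < -1LL * dts_delta_threshold * AV_TIME_BASE)
            /* ts_offset -= delta: the input is shifted forward by 18 s so the
             * output timeline stays monotonic across the discontinuity. */
            printf("discontinuity detected, offset correction = %lld us\n",
                   (long long)-delta);
        return 0;
    }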


Next, the decoding side: InputStream. [Watch the dts, next_dts, pts and next_pts fields of the InputStream struct.]

In output_packet(), the saw_first_ts flag gates a one-time initialization of dts and pts:

    if (!ist->saw_first_ts) {
        ist->dts = ist->st->avg_frame_rate.num ? - ist->st->codec->has_b_frames * AV_TIME_BASE / av_q2d(ist->st->avg_frame_rate) : 0;
        ist->pts = 0;
        if (pkt != NULL && pkt->pts != AV_NOPTS_VALUE && !ist->decoding_needed) {
            ist->dts += av_rescale_q(pkt->pts, ist->st->time_base, AV_TIME_BASE_Q);
            ist->pts = ist->dts; //unused but better to set it to a value thats not totally wrong
        }
        ist->saw_first_ts = 1;
    }
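
For intuition, the arithmetic of that initialization on a hypothetical stream (25 fps, one frame of B-frame delay):

    #include <stdio.h>
    #include <libavutil/avutil.h>

    int main(void)
    {
        int    has_b_frames   = 1;  /* hypothetical codec delay              */
        double avg_frame_rate = 25; /* hypothetical av_q2d(avg_frame_rate)   */

        /* Same arithmetic as the saw_first_ts branch: pre-roll the dts by
         * the decoder's frame delay. */
        long long dts = -(long long)(has_b_frames * AV_TIME_BASE / avg_frame_rate);
        printf("initial ist->dts = %lld us\n", dts); /* -40000 */
        return 0;
    }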

    if (ist->next_dts == AV_NOPTS_VALUE)
        ist->next_dts = ist->dts;
    if (ist->next_pts == AV_NOPTS_VALUE)
        ist->next_pts = ist->pts;
    if (pkt->dts != AV_NOPTS_VALUE) { // if pkt's dts was invalidated above, ist is not updated here
        ist->next_dts = ist->dts = av_rescale_q(pkt->dts, ist->st->time_base, AV_TIME_BASE_Q);
        if (ist->st->codec->codec_type != AVMEDIA_TYPE_VIDEO || !ist->decoding_needed)
            // fallenink: "not video" or "copy"
            ist->next_pts = ist->pts = av_rescale_q(pkt->dts, ist->st->time_base, AV_TIME_BASE_Q);
    }
Next, audio is simply stream-copied (in my case):
    if (!ist->decoding_needed) {
        rate_emu_sleep(ist);
        ist->dts = ist->next_dts;
        switch (ist->st->codec->codec_type) {
        case AVMEDIA_TYPE_AUDIO:
            ist->next_dts += ((int64_t)AV_TIME_BASE * ist->st->codec->frame_size) /
                             ist->st->codec->sample_rate;
            break;
        case AVMEDIA_TYPE_VIDEO:
	    //...
        }
        ist->pts = ist->dts;
        ist->next_pts = ist->next_dts;
    }
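
For the audio copy case, the next_dts step is easy to check by hand (hypothetical AAC parameters):

    #include <stdio.h>
    #include <libavutil/avutil.h>

    int main(void)
    {
        /* Hypothetical AAC stream: 1024 samples per frame at 44100 Hz. */
        int64_t frame_size = 1024, sample_rate = 44100;
        int64_t step = (int64_t)AV_TIME_BASE * frame_size / sample_rate;
        printf("next_dts advances by %lld us per packet\n", (long long)step); /* 23219 */
        return 0;
    }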
The packet is then dispatched to each eligible output stream for stream copy:

    for (i = 0; pkt && i < nb_output_streams; i++) {
        OutputStream *ost = output_streams[i];
        if (!check_output_constraints(ist, ost) || ost->encoding_needed)
            continue;
        do_streamcopy(ist, ost, pkt);
    }
Before decoding,

    ist->pts = ist->next_pts;
    ist->dts = ist->next_dts;

In the decode_video() function,

pkt->dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ist->st->time_base); // propagate the input stream's timestamp back into pkt

In avcodec_decode_video2(), in the single-threaded case,

    ret = avctx->codec->decode(avctx, picture, got_picture_ptr, &tmp);
    picture->pkt_dts= avpkt->dts;

If the decoder produces output, then

if (*got_picture_ptr){
    avctx->frame_number++;
    picture->best_effort_timestamp = guess_correct_pts(avctx,
                                         picture->pkt_pts,
                                         picture->pkt_dts);
}

The guess_correct_pts() call usually just returns the value of picture->pkt_dts. Back in decode_video(),

    best_effort_timestamp = av_frame_get_best_effort_timestamp(decoded_frame);
    if(best_effort_timestamp != AV_NOPTS_VALUE)
        ist->next_pts = ist->pts = av_rescale_q(decoded_frame->pts = best_effort_timestamp, ist->st->time_base, AV_TIME_BASE_Q);

Where is av_frame_get_best_effort_timestamp() defined? Here:

/* ./libavcodec/utils.c */

#define MAKE_ACCESSORS(str, name, type, field) \
    type av_##name##_get_##field(const str *s) { return s->field; } \
    void av_##name##_set_##field(str *s, type v) { s->field = v; }
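
In the same file, the macro is instantiated for this field (quoted from memory, so treat the exact spot as approximate), which means the accessor is nothing more than a generated getter:

    /* libavcodec/utils.c */
    MAKE_ACCESSORS(AVFrame, frame, int64_t, best_effort_timestamp)

    /* ...which expands to:
     *   int64_t av_frame_get_best_effort_timestamp(const AVFrame *s)
     *       { return s->best_effort_timestamp; }
     *   void av_frame_set_best_effort_timestamp(AVFrame *s, int64_t v)
     *       { s->best_effort_timestamp = v; }
     */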
Reading on: before post-processing, a pre-processing step runs first (many codecs need edge padding):
pre_process_video_frame(ist, (AVPicture *)decoded_frame, &buffer_to_free);
In 0.11.1, filters have already replaced the old swscale module:
if (ist->dr1 && decoded_frame->type == FF_BUFFER_TYPE_USER && !changed) { // fallenink: "type" comes from codec_get_buffer()
    FrameBuffer       *buf = decoded_frame->opaque;
    AVFilterBufferRef *fb  = avfilter_get_video_buffer_ref_from_arrays(
                                 decoded_frame->data, decoded_frame->linesize,
                                 AV_PERM_READ | AV_PERM_PRESERVE,
                                 ist->st->codec->width, ist->st->codec->height,
                                 ist->st->codec->pix_fmt);

    avfilter_copy_frame_props(fb, decoded_frame);
    fb->buf->priv = buf;
    fb->buf->free = filter_release_buffer;

    av_assert0(buf->refcount > 0);
    buf->refcount++;
    av_buffersrc_add_ref(ist->filters[i]->filter, fb,
                         AV_BUFFERSRC_FLAG_NO_CHECK_FORMAT |
                         AV_BUFFERSRC_FLAG_NO_COPY);
} else if (av_buffersrc_add_frame(ist->filters[i]->filter, decoded_frame, 0) < 0) { // fallenink: copy the codec buffer into the filter
    av_log(NULL, AV_LOG_FATAL, "Failed to inject frame into filter network\n");
    exit_program(1);
}

dr1 and decoded_frame->type are initialized in init_input_stream(), where we can see:

    ist->dr1 = (codec->capabilities & CODEC_CAP_DR1) && !do_deinterlace;
    if (codec->type == AVMEDIA_TYPE_VIDEO && ist->dr1) {
        ist->st->codec->get_buffer     = codec_get_buffer;
        ist->st->codec->release_buffer = codec_release_buffer;
        ist->st->codec->opaque         = ist;
    }
capabilities corresponds to the value given in the codec's AVCodec definition; for example, in h264.c:

AVCodec ff_h264_decoder = {
    .name                  = "h264",
    //... omitted
    .capabilities          = /*CODEC_CAP_DRAW_HORIZ_BAND |*/ CODEC_CAP_DR1 |
                             CODEC_CAP_DELAY | CODEC_CAP_SLICE_THREADS |
                             CODEC_CAP_FRAME_THREADS,
    //... omitted
};

The decoded frame and its properties are thus handed into the filter graph via the avfilter_copy_frame_props()/av_buffersrc_add_ref() calls above; later, just before encoding, av_buffersink_read() pulls them back out.

That is how the timestamps and related metadata reach the encoding side. After decode_video() returns,
    if (avpkt.duration) {
        duration = av_rescale_q(avpkt.duration, ist->st->time_base, AV_TIME_BASE_Q);
    } else if (ist->st->codec->time_base.num != 0 && ist->st->codec->time_base.den != 0) {
        int ticks = ist->st->parser ? ist->st->parser->repeat_pict + 1
                                    : ist->st->codec->ticks_per_frame;
        duration = ((int64_t)AV_TIME_BASE *
                    ist->st->codec->time_base.num * ticks) /
                    ist->st->codec->time_base.den;
    } else
        duration = 0;

    if (ist->dts != AV_NOPTS_VALUE && duration) {
        ist->next_dts += duration;
    } else
        ist->next_dts = AV_NOPTS_VALUE;

    if (got_output)
        ist->next_pts += duration; // FIXME the duration is not correct in some cases

Now the encoding side, poll_filters(): after a frame is pulled from the graph,
    frame_pts = AV_NOPTS_VALUE;
    if (picref->pts != AV_NOPTS_VALUE) {
        filtered_frame->pts = frame_pts =
            av_rescale_q(picref->pts,
                         ost->filter->filter->inputs[0]->time_base,
                         ost->st->codec->time_base) -
            av_rescale_q(of->start_time, AV_TIME_BASE_Q,
                         ost->st->codec->time_base);
        if (of->start_time && filtered_frame->pts < 0) {
            avfilter_unref_buffer(picref);
            continue;
        }
    }
    //...
    avfilter_fill_frame_from_video_buffer_ref(filtered_frame, picref);
    filtered_frame->pts = frame_pts;
Entering do_video_out():
    if (ist && ist->st->start_time != AV_NOPTS_VALUE && ist->st->first_dts != AV_NOPTS_VALUE && ost->frame_rate.num)
        duration = 1 / (av_q2d(ost->frame_rate) * av_q2d(enc->time_base));

    sync_ipts = in_picture->pts;
    delta = sync_ipts - ost->sync_opts + duration;
    switch (format_video_sync) {
        /* ... ost->sync_opts = lrint(sync_ipts); ... */
    }
    in_picture->pts = ost->sync_opts;
    if (pkt.pts == AV_NOPTS_VALUE && !(enc->codec->capabilities & CODEC_CAP_DELAY))
        pkt.pts = ost->sync_opts;


            if (pkt.pts != AV_NOPTS_VALUE)
                pkt.pts = av_rescale_q(pkt.pts, enc->time_base, ost->st->time_base);
            if (pkt.dts != AV_NOPTS_VALUE)
                pkt.dts = av_rescale_q(pkt.dts, enc->time_base, ost->st->time_base);

From here on there is not much to add: the packet goes straight to the muxer and is written out to the remote end.
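
A sketch of that final hop (0.11-era API; of->ctx is the output AVFormatContext in ffmpeg.c): av_interleaved_write_frame() internally uses ff_interleave_packet_per_dts() unless the muxer supplies its own interleaver.

    /* Sketch only: inside write_frame() the packet reaches the muxer here. */
    if (av_interleaved_write_frame(of->ctx, &pkt) < 0) {
        av_log(NULL, AV_LOG_FATAL, "av_interleaved_write_frame failed\n");
        exit_program(1);
    }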


_______________________________________________ the boring divider _______________________________________________
Now for the problems I actually ran into. The walkthrough above is not technically deep; it is just a way to organize the flow, but without that organization it is hard to spot and handle the detail-level issues.

Symptom 1: during streaming transcode (multithreaded decoding on the input side), if you remove the output constraint in ff_interleave_packet_per_dts() so that packets from any stream go straight to the muxer, the outgoing video lags the audio (its timestamps are smaller). Cause: precisely because the input side decodes on multiple threads, if the thread count is not kept under control and the CPU overloads, packets sit buffered inside the threads instead of being processed promptly, which produces this symptom.

Symptom 2: building on Symptom 1, I switched to single-threaded processing; now, most of the time, the outgoing video runs ahead of the audio, sometimes by several seconds. Cause (note: this cause is specific to my setup and is not universal): at startup the server sent quite a few video packets with timestamp 0. Anything smaller than ist->next_dts gets replaced, and ist->next_dts increases monotonically by one frame duration at a time, hence the drift. My workaround was to treat such frames after decoding as if the decoder produced no output. You must not free the frame in that path: the underlying buffers were allocated inside the codec and then copied into the filter module. In my case this manifested as loss of A/V sync, and that is the real point: the timestamp problem is only the surface; what we all ultimately care about is audio/video synchronization.

Also, on top of Symptom 1: I was pushing the stream to FMS. With video timestamps consistently behind (or far ahead of) the audio, after a while the RTMP writer's select() during write kept failing; I still do not understand the exact cause. Under Symptom 2's setup, however, the problem does not occur. Why? A transcoder naturally cannot enforce strict synchronization; it trusts the timestamps of incoming packets. What the output side must do is interleave audio and video packets properly, which is exactly the job of ff_interleave_packet_per_dts(); if the muxer provides its own interleave_packet, that takes precedence, as sketched below.
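
That precedence is visible in libavformat's interleave helper, paraphrased here from memory:

    /* libavformat/utils.c (paraphrased): the muxer's own interleaver wins;
     * dts-based interleaving is the generic fallback. */
    static int interleave_packet(AVFormatContext *s, AVPacket *out,
                                 AVPacket *in, int flush)
    {
        if (s->oformat->interleave_packet)
            return s->oformat->interleave_packet(s, out, in, flush);
        else
            return ff_interleave_packet_per_dts(s, out, in, flush);
    }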

One more issue: when the decoder receives a video frame split across several packets, ffmpeg sees several consecutive packets with the same pts. Pay attention to this code:

    if (ist->dts != AV_NOPTS_VALUE && duration) {
        ist->next_dts += duration;
    } else
        ist->next_dts = AV_NOPTS_VALUE;

    if (got_output)
        ist->next_pts += duration; // FIXME the duration is not correct in some cases

Logically, when the decoder produces no output, next_pts should not accumulate duration. In my setup, video is transcoded while audio is stream-copied; the result was desynchronization, and on the interleaving side the video pts grew too fast, packets piled up there, and the client's video stalled.
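
A minimal sketch of the adjustment this implies (my workaround's direction, not stock 0.11.1 code): advance the predictions only when a frame actually came out, so a frame split across several same-pts packets is counted once.

    /* Hypothetical workaround inside the video branch of output_packet(). */
    if (got_output) {
        if (ist->dts != AV_NOPTS_VALUE && duration)
            ist->next_dts += duration;
        else
            ist->next_dts = AV_NOPTS_VALUE;
        ist->next_pts += duration;
    }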
