ffplay source analysis: the avformat_find_stream_info, read_frame_internal and avpriv_packet_list_put interfaces of ffmpeg's decoding path

I have been digging into ffmpeg and ffplay recently, and found that source-level analyses of ffmpeg's demuxing are scarce and incomplete online. This post walks through ffmpeg's decoding path based on the ffplay source. The focus is on the key members of the AVFormatContext, AVStream and AVIOContext data structures, in particular the internal ones such as void *priv_data, AVStreamInternal *internal and AVFormatInternal *internal. These non-public members matter most: they are what actually stores the file contents, carries the important interfaces and attributes, and passes data between calls. Understanding why ffmpeg defines these internal members is the key to learning the library.
There are too many interfaces to fit into one post, so the rest are covered in the companion posts.

All the interfaces and data structures below are described in detail; they took quite a while to research and write up.

Key tips:

  1. Many ffmpeg structs (AVStream, URLContext, AVFormatContext) carry a void *priv_data or void *opaque member.
    This member stores the struct's "child struct", which I prefer to call the external protocol or external resource (interfaces plus data). For example, when ffplay demuxes a file, AVFormatContext probes the file format and selects the mov (MP4) AVInputFormat, and AVFormatContext->priv_data then points to the MOVContext (mov.c). Likewise, AVStream's priv_data stores the mov-side MOVStreamContext, and URLContext's priv_data stores the file protocol's FileContext. The point is to separate protocol-specific interfaces and data from the core interfaces, which keeps the whole library extensible. That is why every protocol function starts by assigning the core struct's priv_data to the protocol's own struct type, as in mov_read_header:
    MOVContext *mov = s->priv_data;
    This is a kind of syntactic sugar: the local alias (mov here, sc elsewhere) shields the code from the priv_data field itself, so even renamed external members barely affect the internal interfaces. Most ffmpeg interfaces use this pattern, especially where external protocols are involved:
    rtmp streaming, the file protocol, the mov format, and so on.
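The priv_data idiom above can be sketched in a few lines of plain C. This is a toy mirror of the pattern, not ffmpeg code; the names FormatContext and MovDemuxer are invented for illustration:

```c
#include <stdlib.h>

/* Toy mirror of the priv_data idiom (illustrative names, not ffmpeg's):
 * the generic context only holds a void *, and each "child" format casts
 * it back to its own private struct at the top of its functions. */
typedef struct FormatContext {
    const char *format_name;
    void *priv_data;            /* demuxer-private state, like AVFormatContext */
} FormatContext;

typedef struct MovDemuxer {     /* stands in for MOVContext in mov.c */
    int trak_count;
} MovDemuxer;

static int mov_header(FormatContext *s)
{
    MovDemuxer *mov = s->priv_data;  /* same idiom as MOVContext *mov = s->priv_data; */
    mov->trak_count = 2;             /* pretend the header declared two traks */
    return 0;
}
```

Only mov_header names the private type; every layer above it sees an opaque pointer, so reworking MovDemuxer never touches the generic layer.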

  2. My reading of the Context naming (URLContext, FileContext, AVFormatContext, etc.) is: the data plus the methods (interfaces) needed to get a job done. For instance, URLContext holds the file protocol, with open, close and read methods and with FileContext holding the state and buffers for data read from the file. These contexts nest level by level, which gives the code better extensibility; after all, a library like this is written by many people.
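The "data + methods" idea can be sketched the same way. The hypothetical in-memory protocol below plays the role of ff_file_protocol: a function-pointer table supplies the methods, priv_data supplies the state. All names here are invented for illustration:

```c
#include <string.h>

/* Toy URLContext/URLProtocol pair: methods live in a function-pointer
 * table, state lives behind priv_data. Illustrative names only. */
typedef struct URLCtx URLCtx;
typedef struct Protocol {
    const char *name;
    int (*read)(URLCtx *h, unsigned char *buf, int size);
} Protocol;
struct URLCtx {
    const Protocol *prot;
    void *priv_data;            /* e.g. FileContext for the file protocol */
};

/* an in-memory "file" standing in for FileContext */
typedef struct MemFile { const char *data; int pos, size; } MemFile;

static int mem_read(URLCtx *h, unsigned char *buf, int size)
{
    MemFile *f = h->priv_data;  /* cast back, exactly as file.c does */
    int n = f->size - f->pos;
    if (n > size) n = size;
    memcpy(buf, f->data + f->pos, n);
    f->pos += n;
    return n;                   /* bytes read; 0 at EOF */
}

static const Protocol mem_protocol = { "mem", mem_read };
```

Swapping in a different Protocol table changes where the bytes come from without touching any caller, which is exactly the extensibility the tip describes.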

  3. The internal naming (AVStreamInternal, AVFormatInternal, etc.) generally marks structs that store data and hand it to the interfaces. AVStreamInternal stores per-sample information (position, size, pts, dts) and the codec-side interfaces; AVFormatInternal stores the AVPacket contents produced by indexing samples through AVStreamInternal. They exist mainly to store and pass data.
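As a rough picture of the per-sample information that AVStream->internal->index_entries holds, think of a per-stream table of sample records built while parsing the header and consulted later to locate each sample. A minimal stand-in, with the field set simplified from ffmpeg's AVIndexEntry:

```c
#include <stdint.h>

/* Simplified stand-in for AVIndexEntry: where a sample sits in the
 * file, how big it is, and its timestamp in the stream time base. */
typedef struct IndexEntry {
    int64_t pos;        /* byte offset in the file */
    int     size;       /* sample size in bytes */
    int64_t timestamp;  /* pts/dts in stream time base */
} IndexEntry;

/* Find the last entry with timestamp <= ts (the table is sorted by
 * timestamp), roughly what a seek needs; -1 if ts precedes the table. */
static int index_search(const IndexEntry *tab, int n, int64_t ts)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tab[i].timestamp > ts)
            break;
        best = i;
    }
    return best;
}
```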

My take on AVFormatContext in particular:
AVFormatContext is ffmpeg's demuxing front end. It reads a file or resource directly (a local file or a network stream) and organizes what it reads into the required structure (AVPacket, which stores one frame or one sample) so that the data can then be decoded. Since it demuxes, it must own the file-reading IO interface, AVIOContext, and the audio/video metadata produced while reading the container header ends up in AVStream. Note that both AVIOContext and AVStream connect to the lower protocol layer: AVIOContext->priv_data = FileContext (file.c),
URLProtocol ff_file_protocol holds the file-reading functions while FileContext holds the file handle, and AVStream->internal->index_entries stores the information for every sample. After a sample has been read (for mov/mp4, reading the header yields each sample's location, stored in AVStream->internal->index_entries), it is stored as an AVPacket on the list whose tail is AVFormatInternal->AVPacketList *packet_buffer_end, and decoding follows.
Seen this way, the top-level AVFormatContext struct never talks to the lower protocols (FileContext, MOVContext, MOVStreamContext) directly. As the tips above and the earlier posts said, a library written by many people needs this kind of extensibility and syntactic sugar: changes below do not ripple into the structures above.

ffmpeg's naming is quite sensible too: AVFormatContext's job, put simply, is to read the file into the list ending at AVFormatInternal->AVPacketList *packet_buffer_end, ready for decoding. AVStream and AVIOContext exist to serve that job.


The key interfaces

Tip: when avformat_find_stream_info loops reading packets, it breaks out at if (st->internal->info->frame_delay_evidence && count < 2 && st->internal->avctx->has_b_frames == 0). frame_delay_evidence is set to 1 as soon as a packet has pts != dts, i.e. a sample carries both a pts and a distinct dts. This detail matters: I could not find where the loop exited precisely because I had overlooked it. In the common case only a packet or two needs to be read before this fires; it is worth rereading carefully.
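The detection itself is tiny. A sketch of the rule (NOPTS stands in for AV_NOPTS_VALUE; this is the logic, not ffmpeg's code):

```c
#include <stdint.h>

#define NOPTS INT64_MIN   /* stand-in for AV_NOPTS_VALUE */

/* A packet whose pts and dts are both valid but differ means frames are
 * reordered (B-frames), i.e. the decoder has a delay. */
static int frame_delay_evidence(const int64_t *pts, const int64_t *dts, int n)
{
    for (int i = 0; i < n; i++)
        if (pts[i] != NOPTS && dts[i] != NOPTS && pts[i] != dts[i])
            return 1;
    return 0;
}
```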

// Read some packets, select decoders, and copy parameters from AVStream into the internal AVCodecContext for the decoding that follows
int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options)
{
    int i, count = 0, ret = 0, j;
    int64_t read_size;
    AVStream *st;
    AVCodecContext *avctx;
    AVPacket pkt1;
    int64_t old_offset  = avio_tell(ic->pb); // offset from the start of the file, after the header has been read
    // new streams might appear, no options for those
    int orig_nb_streams = ic->nb_streams; // parsed from the header; typically 2 streams
    int flush_codecs;
    int64_t max_analyze_duration = ic->max_analyze_duration;
    int64_t max_stream_analyze_duration;
    int64_t max_subtitle_analyze_duration;
    int64_t probesize = ic->probesize;
    int eof_reached = 0;
    int *missing_streams = av_opt_ptr(ic->iformat->priv_class, ic->priv_data, "missing_streams");

    flush_codecs = probesize > 0;

    av_opt_set(ic, "skip_clear", "1", AV_OPT_SEARCH_CHILDREN);

    max_stream_analyze_duration = max_analyze_duration;
    max_subtitle_analyze_duration = max_analyze_duration;
    if (!max_analyze_duration) {
        max_stream_analyze_duration =
        max_analyze_duration        = 5*AV_TIME_BASE;
        max_subtitle_analyze_duration = 30*AV_TIME_BASE;
        if (!strcmp(ic->iformat->name, "flv"))
            max_stream_analyze_duration = 90*AV_TIME_BASE;
        if (!strcmp(ic->iformat->name, "mpeg") || !strcmp(ic->iformat->name, "mpegts"))
            max_stream_analyze_duration = 7*AV_TIME_BASE;
    }

    // Populate each stream's internal->avctx (including time_base); this keeps the code below simpler
    for (i = 0; i < ic->nb_streams; i++) {
        const AVCodec *codec;
        AVDictionary *thread_opt = NULL;
        st = ic->streams[i];
        avctx = st->internal->avctx;

        if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO ||
            st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
/*            if (!st->time_base.num)
                st->time_base = */
            if (!avctx->time_base.num)
                avctx->time_base = st->time_base;
        }
        ...
        // only for the split stuff
        // initialize the parser matching the stream's codec id
        if (!st->parser && !(ic->flags & AVFMT_FLAG_NOPARSE) && st->internal->request_probe <= 0) {
            st->parser = av_parser_init(st->codecpar->codec_id);
            if (st->parser) {
                if (st->need_parsing == AVSTREAM_PARSE_HEADERS) {
                    st->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
                } else if (st->need_parsing == AVSTREAM_PARSE_FULL_RAW) {
                    st->parser->flags |= PARSER_FLAG_USE_CODEC_TS;
                }
            } else if (st->need_parsing) {
                av_log(ic, AV_LOG_VERBOSE, "parser not found for codec "
                       "%s, packets or times may be invalid.\n",
                       avcodec_get_name(st->codecpar->codec_id));
            }
        }

        if (st->codecpar->codec_id != st->internal->orig_codec_id)
            st->internal->orig_codec_id = st->codecpar->codec_id;

        // copy parameters from AVStream *st into AVCodecContext *avctx
        ret = avcodec_parameters_to_context(avctx, st->codecpar);
        if (ret < 0)
            goto find_stream_info_err;
        if (st->internal->request_probe <= 0)
            st->internal->avctx_inited = 1;

        // find the decoder used for probing
        codec = find_probe_decoder(ic, st, st->codecpar->codec_id);

        /* Force thread count to 1 since the H.264 decoder will not extract
         * SPS and PPS to extradata during multi-threaded decoding. */
        av_dict_set(options ? &options[i] : &thread_opt, "threads", "1", 0);

        if (ic->codec_whitelist)
            av_dict_set(options ? &options[i] : &thread_opt, "codec_whitelist", ic->codec_whitelist, 0);
         ...

    for (i = 0; i < ic->nb_streams; i++) {
#if FF_API_R_FRAME_RATE
        ic->streams[i]->internal->info->last_dts = AV_NOPTS_VALUE;
#endif
        ic->streams[i]->internal->info->fps_first_dts = AV_NOPTS_VALUE;
        ic->streams[i]->internal->info->fps_last_dts  = AV_NOPTS_VALUE;
    }

    read_size = 0; // bytes of packet data read so far

    // Tip: for (;;) is the idiomatic infinite loop; it compiles to a plain unconditional jump, just like while (1)
    for (;;) {
        const AVPacket *pkt;
        int analyzed_all_streams;
        if (ff_check_interrupt(&ic->interrupt_callback)) {
            ret = AVERROR_EXIT;
            av_log(ic, AV_LOG_DEBUG, "interrupted\n");
            break;
        }
        ...

            // Break out of the packet-reading loop: frame_delay_evidence is set
            // once a packet shows pts != dts (see the tip above). Easy to miss,
            // and usually only a packet or two is read before this fires.
            if (st->internal->info->frame_delay_evidence && count < 2 && st->internal->avctx->has_b_frames == 0)
                break;
            if (!st->internal->avctx->extradata &&
                (!st->internal->extract_extradata.inited ||
                 st->internal->extract_extradata.bsf) &&
                extract_extradata_check(st))
                break;
            if (st->first_dts == AV_NOPTS_VALUE &&
                !(ic->iformat->flags & AVFMT_NOTIMESTAMPS) &&
                st->codec_info_nb_frames < ((st->disposition & AV_DISPOSITION_ATTACHED_PIC) ? 1 : ic->max_ts_probe) &&
                (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO ||
                 st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO))
                break;
        }
        analyzed_all_streams = 0;
        ...
       ...
        /* NOTE: A new stream can be added there if no header in file
         * (AVFMTCTX_NOHEADER). */
        // read one sample from the file into pkt1
        ret = read_frame_internal(ic, &pkt1);
        if (ret == AVERROR(EAGAIN))
            continue;

        if (ret < 0) {
            /* EOF or error*/
            eof_reached = 1;
            break;
        }

        // Append pkt1 to ic->internal->packet_buffer; subsequent reads are served from this buffer
        // (when the list is empty, the new node becomes its head)
        if (!(ic->flags & AVFMT_FLAG_NOBUFFER)) {
            ret = avpriv_packet_list_put(&ic->internal->packet_buffer,
                                     &ic->internal->packet_buffer_end,
                                     &pkt1, NULL, 0);
            if (ret < 0)
                goto unref_then_goto_end;

            pkt = &ic->internal->packet_buffer_end->pkt;
        } else {
            pkt = &pkt1;
        }

        st = ic->streams[pkt->stream_index];
        if (!(st->disposition & AV_DISPOSITION_ATTACHED_PIC))
            read_size += pkt->size;

        avctx = st->internal->avctx;
       ...

        if (pkt->dts != AV_NOPTS_VALUE && st->codec_info_nb_frames > 1) {
            /* check for non-increasing dts */
            if (st->internal->info->fps_last_dts != AV_NOPTS_VALUE &&
                st->internal->info->fps_last_dts >= pkt->dts) {
                ...
                st->internal->info->fps_first_dts =
                st->internal->info->fps_last_dts  = AV_NOPTS_VALUE;
            }
            /* Check for a discontinuity in dts. If the difference in dts
             * is more than 1000 times the average packet duration in the
             * sequence, we treat it as a discontinuity. */
            if (st->internal->info->fps_last_dts != AV_NOPTS_VALUE &&
                st->internal->info->fps_last_dts_idx > st->internal->info->fps_first_dts_idx &&
                (pkt->dts - (uint64_t)st->internal->info->fps_last_dts) / 1000 >
                (st->internal->info->fps_last_dts     - (uint64_t)st->internal->info->fps_first_dts) /
                (st->internal->info->fps_last_dts_idx - st->internal->info->fps_first_dts_idx)) {
                ...
                st->internal->info->fps_first_dts =
                st->internal->info->fps_last_dts  = AV_NOPTS_VALUE;
            }

            /* update stored dts values */
            if (st->internal->info->fps_first_dts == AV_NOPTS_VALUE) {
                st->internal->info->fps_first_dts     = pkt->dts;
                st->internal->info->fps_first_dts_idx = st->codec_info_nb_frames;
            }
            st->internal->info->fps_last_dts     = pkt->dts;
            st->internal->info->fps_last_dts_idx = st->codec_info_nb_frames;
        }
        if (st->codec_info_nb_frames>1) {
            int64_t t = 0;
            int64_t limit;

            if (st->time_base.den > 0)
                t = av_rescale_q(st->internal->info->codec_info_duration, st->time_base, AV_TIME_BASE_Q);
            if (st->avg_frame_rate.num > 0)
                t = FFMAX(t, av_rescale_q(st->codec_info_nb_frames, av_inv_q(st->avg_frame_rate), AV_TIME_BASE_Q));

            if (   t == 0
                && st->codec_info_nb_frames>30
                && st->internal->info->fps_first_dts != AV_NOPTS_VALUE
                && st->internal->info->fps_last_dts  != AV_NOPTS_VALUE)
                t = FFMAX(t, av_rescale_q(st->internal->info->fps_last_dts - st->internal->info->fps_first_dts, st->time_base, AV_TIME_BASE_Q));

            if (analyzed_all_streams)                                limit = max_analyze_duration;
            else if (avctx->codec_type == AVMEDIA_TYPE_SUBTITLE) limit = max_subtitle_analyze_duration;
            else                                                     limit = max_stream_analyze_duration;

            if (t >= limit) {
                av_log(ic, AV_LOG_VERBOSE, "max_analyze_duration %"PRId64" reached at %"PRId64" microseconds st:%d\n",
                       limit,
                       t, pkt->stream_index);
                if (ic->flags & AVFMT_FLAG_NOBUFFER)
                    av_packet_unref(&pkt1);
                break;
            }
            if (pkt->duration) {
                if (avctx->codec_type == AVMEDIA_TYPE_SUBTITLE && pkt->pts != AV_NOPTS_VALUE && st->start_time != AV_NOPTS_VALUE && pkt->pts >= st->start_time) {
                    st->internal->info->codec_info_duration = FFMIN(pkt->pts - st->start_time, st->internal->info->codec_info_duration + pkt->duration);
                } else
                    st->internal->info->codec_info_duration += pkt->duration;
                st->internal->info->codec_info_duration_fields += st->parser && st->need_parsing && avctx->ticks_per_frame ==2 ? st->parser->repeat_pict + 1 : 2;
            }
        }
        if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
#if FF_API_R_FRAME_RATE
            ff_rfps_add_frame(ic, st, pkt->dts);
#endif


            // Key: a packet with both pts and dts set but unequal is evidence of
            // frame reordering (B-frames); this flag later triggers the break out
            // of the read loop above. Easy to miss.
            if (pkt->dts != pkt->pts && pkt->dts != AV_NOPTS_VALUE && pkt->pts != AV_NOPTS_VALUE)
                st->internal->info->frame_delay_evidence = 1;
        }

        if (!st->internal->avctx->extradata) {
            ret = extract_extradata(st, pkt);
            if (ret < 0)
                goto unref_then_goto_end;
        }

        /* If still no information, we try to open the codec and to
         * decompress the frame. We try to avoid that in most cases as
         * it takes longer and uses more memory. For MPEG-4, we need to
         * decompress for QuickTime.
         *
         * If AV_CODEC_CAP_CHANNEL_CONF is set this will force decoding of at
         * least one frame of codec data, this makes sure the codec initializes
         * the channel configuration and does not only trust the values from
         * the container. */
         // Decode the packet here only to probe still-missing parameters;
         // the first few samples often fail to decode anyway
        try_decode_frame(ic, st, pkt,
                         (options && i < orig_nb_streams) ? &options[i] : NULL);

        if (ic->flags & AVFMT_FLAG_NOBUFFER)
            av_packet_unref(&pkt1);

        st->codec_info_nb_frames++; // number of packets read for this stream
        count++;
    }
    ....
At this point the main job of avformat_find_stream_info is done: read some packets, select decoders, and copy parameters from AVStream into the internal AVCodecContext for the decoding that follows. The code below handles an incomplete header: it estimates frame rates and fills in a few parameters. The crucial part remains the code above, especially st->internal->info->frame_delay_evidence = 1 feeding the exit at
if (st->internal->info->frame_delay_evidence && count < 2 &&
 st->internal->avctx->has_b_frames == 0). I overlooked it for two weeks; with this many variables, it pays to reread.


   // compute frame rates from the dts samples collected above
    ff_rfps_calculate(ic);

   // when there is no header, or the header carries no frame-rate info, estimate the frame rate here
    for (i = 0; i < ic->nb_streams; i++) {
        st = ic->streams[i];
        avctx = st->internal->avctx;
        if (avctx->codec_type == AVMEDIA_TYPE_VIDEO) {
            if (avctx->codec_id == AV_CODEC_ID_RAWVIDEO && !avctx->codec_tag && !avctx->bits_per_coded_sample) {
                uint32_t tag= avcodec_pix_fmt_to_codec_tag(avctx->pix_fmt);
                if (avpriv_find_pix_fmt(avpriv_get_raw_pix_fmt_tags(), tag) == avctx->pix_fmt)
                    avctx->codec_tag= tag;
            }

            /* estimate average framerate if not set by demuxer */
            if (st->internal->info->codec_info_duration_fields &&
                !st->avg_frame_rate.num &&
                st->internal->info->codec_info_duration) {
                ...
            }
        } else if (avctx->codec_type == AVMEDIA_TYPE_AUDIO) {
            if (!avctx->bits_per_coded_sample)
                avctx->bits_per_coded_sample =
                    av_get_bits_per_sample(avctx->codec_id);
            ...
            }
        }
    }

    if (probesize)
        estimate_timings(ic, old_offset);

    av_opt_set(ic, "skip_clear", "0", AV_OPT_SEARCH_CHILDREN);

    if (ret >= 0 && ic->nb_streams)
        /* We could not have all the codec parameters before EOF. */
        ret = -1;
    for (i = 0; i < ic->nb_streams; i++) {
        const char *errmsg;
        st = ic->streams[i];
    ...
    }

    compute_chapters_end(ic);

   // copy parameters back from the internal codec contexts into the streams
    /* update the stream parameters from the internal codec contexts */
    for (i = 0; i < ic->nb_streams; i++) {
        st = ic->streams[i];

        if (st->internal->avctx_inited) {
            int orig_w = st->codecpar->width;
            int orig_h = st->codecpar->height;
            ret = avcodec_parameters_from_context(st->codecpar, st->internal->avctx);
            if (ret < 0)
                goto find_stream_info_err;
            ret = add_coded_side_data(st, st->internal->avctx);
            if (ret < 0)
                goto find_stream_info_err;
#if FF_API_LOWRES
            // The decoder might reduce the video size by the lowres factor.
            if (st->internal->avctx->lowres && orig_w) {
                st->codecpar->width = orig_w;
                st->codecpar->height = orig_h;
            }
#endif
        }
        ....
        st->internal->avctx_inited = 0;
    }

find_stream_info_err:
    for (i = 0; i < ic->nb_streams; i++) {
        st = ic->streams[i];
        if (st->internal->info)
            av_freep(&st->internal->info->duration_error);
        avcodec_close(ic->streams[i]->internal->avctx);
        av_freep(&ic->streams[i]->internal->info);
        av_bsf_free(&ic->streams[i]->internal->extract_extradata.bsf);
        av_packet_free(&ic->streams[i]->internal->extract_extradata.pkt);
    }
    if (ic->pb)
        av_log(ic, AV_LOG_DEBUG, "After avformat_find_stream_info() pos: %"PRId64" bytes read:%"PRId64" seeks:%d frames:%d\n",
               avio_tell(ic->pb), ic->pb->bytes_read, ic->pb->seek_count, count);
    return ret;

unref_then_goto_end:
    av_packet_unref(&pkt1);
    goto find_stream_info_err;
}

The interface that reads one packet:

static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
{
    int ret, i, got_packet = 0; // set once a packet is obtained, used to leave the loop
    AVDictionary *metadata = NULL;

   // got_packet terminates the loop
    while (!got_packet && !s->internal->parse_queue) {
        AVStream *st;

        /* read next packet */
        // read one packet at a time
        ret = ff_read_packet(s, pkt);
        ..
        ret = 0;
        st  = s->streams[pkt->stream_index];

        st->event_flags |= AVSTREAM_EVENT_FLAG_NEW_PACKETS;

        /* update context if required */
        // refresh the internal AVCodecContext parameters when the demuxer changed them
        if (st->internal->need_context_update) {
            if (avcodec_is_open(st->internal->avctx)) {
                av_log(s, AV_LOG_DEBUG, "Demuxer context update while decoder is open, closing and trying to re-open\n");
                avcodec_close(st->internal->avctx);
                st->internal->info->found_decoder = 0;
            }

            /* close parser, because it depends on the codec */
            if (st->parser && st->internal->avctx->codec_id != st->codecpar->codec_id) {
                av_parser_close(st->parser);
                st->parser = NULL;
            }

            ret = avcodec_parameters_to_context(st->internal->avctx, st->codecpar);
            if (ret < 0) {
                av_packet_unref(pkt);
                return ret;
            }
            ......
            st->internal->need_context_update = 0;
        }

        if (!st->need_parsing || !st->parser) {
            /* no parsing needed: we just output the packet as is */
            // fix up the packet's timing fields
            compute_pkt_fields(s, st, NULL, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
            if ((s->iformat->flags & AVFMT_GENERIC_INDEX) &&
                (pkt->flags & AV_PKT_FLAG_KEY) && pkt->dts != AV_NOPTS_VALUE) {
                ff_reduce_index(s, st->index);
                av_add_index_entry(st, pkt->pos, pkt->dts,
                                   0, 0, AVINDEX_KEYFRAME);
            }
            got_packet = 1; // packet obtained; leave the loop
        } else if (st->discard < AVDISCARD_ALL) {
            if ((ret = parse_packet(s, pkt, pkt->stream_index, 0)) < 0)
                return ret;
            st->codecpar->sample_rate = st->internal->avctx->sample_rate;
            st->codecpar->bit_rate = st->internal->avctx->bit_rate;
            st->codecpar->channels = st->internal->avctx->channels;
            st->codecpar->channel_layout = st->internal->avctx->channel_layout;
            st->codecpar->codec_id = st->internal->avctx->codec_id;
        } else {
            /* free packet */
            av_packet_unref(pkt);
        }
        if (pkt->flags & AV_PKT_FLAG_KEY)
            st->internal->skip_to_keyframe = 0;
        if (st->internal->skip_to_keyframe) {
            av_packet_unref(pkt);
            got_packet = 0;
        }
    }

    if (!got_packet && s->internal->parse_queue)
        ret = avpriv_packet_list_get(&s->internal->parse_queue, &s->internal->parse_queue_end, pkt);

   // handling for discarded samples; can be skipped on a first read
    if (ret >= 0) {
        AVStream *st = s->streams[pkt->stream_index];
        int discard_padding = 0;
        if (st->internal->first_discard_sample && pkt->pts != AV_NOPTS_VALUE) {
            int64_t pts = pkt->pts - (is_relative(pkt->pts) ? RELATIVE_TS_BASE : 0);
            int64_t sample = ts_to_samples(st, pts);
            int duration = ts_to_samples(st, pkt->duration);
            int64_t end_sample = sample + duration;
            if (duration > 0 && end_sample >= st->internal->first_discard_sample &&
                sample < st->internal->last_discard_sample)
                discard_padding = FFMIN(end_sample - st->internal->first_discard_sample, duration);
        }
        if (st->internal->start_skip_samples && (pkt->pts == 0 || pkt->pts == RELATIVE_TS_BASE))
            st->internal->skip_samples = st->internal->start_skip_samples;
        if (st->internal->skip_samples || discard_padding) {
            uint8_t *p = av_packet_new_side_data(pkt, AV_PKT_DATA_SKIP_SAMPLES, 10);
            if (p) {
                AV_WL32(p, st->internal->skip_samples);
                AV_WL32(p + 4, discard_padding);
                av_log(s, AV_LOG_DEBUG, "demuxer injecting skip %d / discard %d\n", st->internal->skip_samples, discard_padding);
            }
            st->internal->skip_samples = 0;
        }

        if (st->internal->inject_global_side_data) {
            for (i = 0; i < st->nb_side_data; i++) {
                AVPacketSideData *src_sd = &st->side_data[i];
                uint8_t *dst_data;

                if (av_packet_get_side_data(pkt, src_sd->type, NULL))
                    continue;

                dst_data = av_packet_new_side_data(pkt, src_sd->type, src_sd->size);
                if (!dst_data) {
                    av_log(s, AV_LOG_WARNING, "Could not inject global side data\n");
                    continue;
                }

                memcpy(dst_data, src_sd->data, src_sd->size);
            }
            st->internal->inject_global_side_data = 0;
        }
    }

    av_opt_get_dict_val(s, "metadata", AV_OPT_SEARCH_CHILDREN, &metadata);
    if (metadata) {
        s->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED;
        av_dict_copy(&s->metadata, metadata, 0);
        av_dict_free(&metadata);
        av_opt_set_dict_val(s, "metadata", NULL, AV_OPT_SEARCH_CHILDREN);
    }
...
    /* A demuxer might have returned EOF because of an IO error, let's
     * propagate this back to the user. */
    if (ret == AVERROR_EOF && s->pb && s->pb->error < 0 && s->pb->error != AVERROR(EAGAIN))
        ret = s->pb->error;

    return ret;
}

The protocol-level packet read looks as follows; it is covered in the post on mov/mp4 demuxing (mov_read_header for the metadata, mov_read_packet for the sample data, and mov_read_trak).

// read a packet through the protocol interface
int ff_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    int ret, i, err;
    AVStream *st;

    pkt->data = NULL;
    pkt->size = 0;
    av_init_packet(pkt); // initialize the AVPacket fields

    // Tip: for (;;) is the idiomatic infinite loop; it compiles to a plain unconditional jump, just like while (1)
    for (;;) {
        AVPacketList *pktl = s->internal->raw_packet_buffer;
        const AVPacket *pkt1;

        if (pktl) {
            st = s->streams[pktl->pkt.stream_index];
            if (s->internal->raw_packet_buffer_remaining_size <= 0)
                if ((err = probe_codec(s, st, NULL)) < 0)
                    return err;
            if (st->internal->request_probe <= 0) {
               // take a packet from raw_packet_buffer
                avpriv_packet_list_get(&s->internal->raw_packet_buffer,
                                   &s->internal->raw_packet_buffer_end, pkt);
                s->internal->raw_packet_buffer_remaining_size += pkt->size;
                return 0;
            }
        }

        // Dispatch to the demuxer's read_packet for this AVInputFormat
        // (e.g. mov_read_packet for mov/mp4); see the linked post on
        // mov_read_header, mov_read_packet and mov_read_trak for details
        ret = s->iformat->read_packet(s, pkt);
        if (ret < 0) { //小于0 释放packet
            ...
        }

        // Allocate pkt->buf and copy pkt->data into pkt->buf->data; my reading of
        // this interface: packet contents are always owned by, and read through, pkt->buf
        err = av_packet_make_refcounted(pkt);
        if (err < 0) {
            av_packet_unref(pkt);
            return err;
        }

        st = s->streams[pkt->stream_index];
        ...
        pkt->dts = wrap_timestamp(st, pkt->dts);
        pkt->pts = wrap_timestamp(st, pkt->pts);

        force_codec_ids(s, st);

        /* TODO: audio: time filter; video: frame reordering (pts != dts) */
        if (s->use_wallclock_as_timestamps)
            pkt->dts = pkt->pts = av_rescale_q(av_gettime(), AV_TIME_BASE_Q, st->time_base);

        // In the common case this is where the function returns: a packet was
        // read and no probing is pending, so s->internal->raw_packet_buffer
        // normally stays NULL
        if (!pktl && st->internal->request_probe <= 0)
            return ret;

       // Queue the packet on raw_packet_buffer for codec probing
       // (when the list is empty, the new node becomes its head)
        err = avpriv_packet_list_put(&s->internal->raw_packet_buffer,
                                 &s->internal->raw_packet_buffer_end,
                                 pkt, NULL, 0);
        if (err < 0) {
            av_packet_unref(pkt);
            return err;
        }
        pkt1 = &s->internal->raw_packet_buffer_end->pkt;
        s->internal->raw_packet_buffer_remaining_size -= pkt1->size;

        if ((err = probe_codec(s, st, pkt1)) < 0)
            return err;
    }
}

// Allocate pkt->buf and copy pkt->data into pkt->buf->data; my reading of
// this interface: packet contents are always owned by, and read through, pkt->buf
int av_packet_make_refcounted(AVPacket *pkt)
{
    int ret;

    if (pkt->buf)
        return 0;

    ret = packet_alloc(&pkt->buf, pkt->size); // allocate the owned buffer
    if (ret < 0)
        return ret;
    av_assert1(!pkt->size || pkt->data);
    if (pkt->size)
        memcpy(pkt->buf->data, pkt->data, pkt->size); // copy the payload

    pkt->data = pkt->buf->data;

    return 0;
}
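The semantics of av_packet_make_refcounted can be shown with a toy packet: if data is borrowed (no buf yet), allocate an owned buffer, copy into it, and repoint data. The Buf and Packet names below are invented, not ffmpeg's:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Toy mirror of AVPacket's data/buf relationship (illustrative names). */
typedef struct Buf    { uint8_t *data; size_t size; } Buf;
typedef struct Packet { uint8_t *data; int size; Buf *buf; } Packet;

static int make_refcounted(Packet *pkt)
{
    if (pkt->buf)                        /* already owns its memory */
        return 0;
    Buf *b = malloc(sizeof(*b));
    if (!b) return -1;
    b->data = malloc(pkt->size ? pkt->size : 1);
    if (!b->data) { free(b); return -1; }
    b->size = pkt->size;
    if (pkt->size)
        memcpy(b->data, pkt->data, pkt->size);  /* copy the borrowed bytes */
    pkt->buf  = b;
    pkt->data = b->data;                 /* from now on, read via buf */
    return 0;
}
```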

An important tip:
As noted above, AVFormatInternal's AVPacketList stores the demuxed data, and it is filled and drained as a queue. ffplay imitates exactly this scheme, so the two are covered together here.

ffmpeg's queue put and get

// plast_pktl is the tail pointer of the AVPacketList; after each insert it advances to the new node.
// When the list is empty, the new node becomes the head *packet_buffer, which later reads consume.
int avpriv_packet_list_put(AVPacketList **packet_buffer, AVPacketList **plast_pktl , AVPacket *pkt,
                                            int (*copy)(AVPacket *dst,  const AVPacket *src), int flags)
{
    AVPacketList *pktl = av_mallocz(sizeof(AVPacketList)); // allocate the list node
    int ret;

    if (!pktl)
        return AVERROR(ENOMEM);

    if (copy) {
        ret = copy(&pktl->pkt, pkt);
        if (ret < 0) {
            av_free(pktl);
            return ret;
        }
    } else {
        // copy pkt->data into a refcounted pkt->buf
        ret = av_packet_make_refcounted(pkt);
        if (ret < 0) {
            av_free(pktl);
            return ret;
        }
        av_packet_move_ref(&pktl->pkt, pkt);
    }
   // plast_pktl is the tail pointer; after each insert it advances to the new node.
   // When the list is empty, the new node becomes the head *packet_buffer.
    if (*packet_buffer) 
        (*plast_pktl)->next = pktl;
    else
        *packet_buffer = pktl; // empty list: the new node becomes the head

    /* Add the packet in the buffered packet list. */
    *plast_pktl = pktl; // advance the tail pointer
    return 0;
}
// Take a packet from the head of the queue.
// When the queue drains, the tail pointer pkt_buffer_end is cleared.
int avpriv_packet_list_get(AVPacketList **pkt_buffer,
                           AVPacketList **pkt_buffer_end,
                           AVPacket      *pkt)
{
    AVPacketList *pktl;
    if (!*pkt_buffer)
        return AVERROR(EAGAIN);
    pktl        = *pkt_buffer;
    *pkt        = pktl->pkt;
    *pkt_buffer = pktl->next; // advance the head pointer
    if (!pktl->next)
    // the list has drained: clear the tail pointer pkt_buffer_end
        *pkt_buffer_end = NULL;
    av_freep(&pktl);
    return 0;
}
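Stripped of the AVPacket plumbing, the head/tail bookkeeping of the two functions above reduces to a plain singly linked FIFO (an int payload stands in for the packet):

```c
#include <stdlib.h>

typedef struct Node { int pkt; struct Node *next; } Node;

/* put: link after the tail (or become the head if empty); tail advances. */
static int list_put(Node **head, Node **tail, int pkt)
{
    Node *n = malloc(sizeof(*n));
    if (!n) return -1;
    n->pkt  = pkt;
    n->next = NULL;
    if (*head)
        (*tail)->next = n;   /* non-empty: append after the old tail */
    else
        *head = n;           /* empty: the new node is the head */
    *tail = n;               /* the tail always moves to the new node */
    return 0;
}

/* get: pop the head; clear the tail when the list drains. */
static int list_get(Node **head, Node **tail, int *pkt)
{
    Node *n = *head;
    if (!n) return -1;       /* analogue of AVERROR(EAGAIN) */
    *pkt  = n->pkt;
    *head = n->next;
    if (!n->next)
        *tail = NULL;        /* drained: the tail must not dangle */
    free(n);
    return 0;
}
```

Clearing the tail on drain is the easy-to-forget step; without it the next put would write through a dangling pointer.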

ffplay's queue put and get

// ffplay's queue structure
typedef struct PacketQueue {
    // first_pkt is the head of the queue: packets are always taken from first_pkt.
    // last_pkt is the tail pointer of the MyAVPacketList; it advances after each insert,
    // and only when last_pkt is NULL (empty queue) does a new packet become first_pkt.
    MyAVPacketList *first_pkt, *last_pkt;
    int nb_packets; // number of packets in the queue
    int size; // total byte size of the queue
    int64_t duration; // total duration of the queued packets
    int abort_request; // flag: stop putting and getting
    int serial; // queue serial number
    SDL_mutex *mutex; // queue lock: the queue is touched from multiple threads
    SDL_cond *cond; // condition variable for synchronization
} PacketQueue;
// Append to the list headed by q->first_pkt and signal the condition variable so decoding can proceed.
// This interface is modeled on ffmpeg's avpriv_packet_list_put:
// q->last_pkt is the tail pointer of the MyAVPacketList and advances after each insert;
// when last_pkt is NULL (empty queue) the new packet becomes first_pkt, which readers consume.
static int packet_queue_put_private(PacketQueue *q, AVPacket *pkt)
{
    MyAVPacketList *pkt1;

    if (q->abort_request) // queue has been aborted
       return -1;

    pkt1 = av_malloc(sizeof(MyAVPacketList)); // allocate the list node
    if (!pkt1)
        return -1;
    pkt1->pkt = *pkt;
    pkt1->next = NULL;
    if (pkt == &flush_pkt) // a flush packet bumps the serial: the queue is treated as a new one
        q->serial++;
    pkt1->serial = q->serial; // record which serial this packet belongs to

   // q->last_pkt doubles as an is-empty marker:
   // when last_pkt is NULL the queue is empty and the packet becomes first_pkt, which readers consume
    if (!q->last_pkt)
        q->first_pkt = pkt1;
    else
        q->last_pkt->next = pkt1;
    q->last_pkt = pkt1; // advance the tail pointer
    q->nb_packets++;  // one more packet in the queue
    q->size += pkt1->pkt.size + sizeof(*pkt1);
    q->duration += pkt1->pkt.duration; // add this packet's duration to the total
    /* XXX: should duplicate packet data in DV case */
    SDL_CondSignal(q->cond); // wake a consumer so decoding proceeds
    return 0;
}
// Take a packet from the queue.
// When first_pkt drains, the tail pointer last_pkt is cleared.
// If the queue is empty and block is set, the thread waits until packet_queue_put_private
// fills first_pkt and wakes it via SDL_CondSignal(q->cond).
static int packet_queue_get(PacketQueue *q, AVPacket *pkt, int block, int *serial)
{
    MyAVPacketList *pkt1;
    int ret;

    SDL_LockMutex(q->mutex); // lock the queue

    // Tip: for (;;) is the idiomatic infinite loop; it compiles to a plain unconditional jump, just like while (1)
    for (;;) {
        if (q->abort_request) { // queue aborted?
            ret = -1;
            break;
        }

        pkt1 = q->first_pkt; // take the head of the list
        if (pkt1) {
            q->first_pkt = pkt1->next; // advance the head pointer
            if (!q->first_pkt)
            // the queue has drained: clear the marker last_pkt
                q->last_pkt = NULL;
            q->nb_packets--; // one packet fewer in the queue
            q->size -= pkt1->pkt.size + sizeof(*pkt1); // shrink the byte count
            q->duration -= pkt1->pkt.duration; // shrink the total duration
            *pkt = pkt1->pkt; // hand the packet to the caller
            if (serial)
            // report the serial number of the packet just taken
                *serial = pkt1->serial;
            av_free(pkt1);
            ret = 1;
            break;
        } else if (!block) {
            ret = 0;
            break;
        } else {
 // the queue is empty and block is set: wait until packet_queue_put_private signals q->cond
            SDL_CondWait(q->cond, q->mutex);
        }
    }
    SDL_UnlockMutex(q->mutex);
    return ret;
}
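ffplay's mutex plus condition-variable pattern maps one-to-one onto pthreads. Below is a minimal sketch of the same put/get protocol with the SDL calls replaced by their pthread equivalents and the serial/size bookkeeping trimmed away; it is an illustration of the pattern, not ffplay code:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct QNode { int pkt; struct QNode *next; } QNode;
typedef struct Queue {
    QNode *first, *last;
    pthread_mutex_t mutex;   /* plays SDL_mutex */
    pthread_cond_t  cond;    /* plays SDL_cond */
} Queue;

static void queue_init(Queue *q)
{
    q->first = q->last = NULL;
    pthread_mutex_init(&q->mutex, NULL);
    pthread_cond_init(&q->cond, NULL);
}

static int queue_put(Queue *q, int pkt)
{
    QNode *n = malloc(sizeof(*n));
    if (!n) return -1;
    n->pkt = pkt; n->next = NULL;
    pthread_mutex_lock(&q->mutex);
    if (!q->last) q->first = n;      /* empty queue: node becomes the head */
    else          q->last->next = n;
    q->last = n;
    pthread_cond_signal(&q->cond);   /* wake a blocked consumer */
    pthread_mutex_unlock(&q->mutex);
    return 0;
}

/* returns 1 with a packet, 0 if non-blocking and empty */
static int queue_get(Queue *q, int *pkt, int block)
{
    int ret;
    pthread_mutex_lock(&q->mutex);
    for (;;) {
        QNode *n = q->first;
        if (n) {
            q->first = n->next;
            if (!q->first) q->last = NULL;   /* drained: clear the tail */
            *pkt = n->pkt;
            free(n);
            ret = 1;
            break;
        } else if (!block) {
            ret = 0;
            break;
        } else {
            pthread_cond_wait(&q->cond, &q->mutex); /* sleep until put() signals */
        }
    }
    pthread_mutex_unlock(&q->mutex);
    return ret;
}
```

pthread_cond_wait atomically releases the mutex while sleeping and reacquires it on wakeup, which is exactly what SDL_CondWait does for ffplay's consumer thread.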

av_read_frame is what ffplay calls all the time; in effect it just wraps read_frame_internal.

int av_read_frame(AVFormatContext *s, AVPacket *pkt)
{
    const int genpts = s->flags & AVFMT_FLAG_GENPTS;
    int eof = 0;
    int ret;
    AVStream *st;

    if (!genpts) {
        ret = s->internal->packet_buffer
              ? avpriv_packet_list_get(&s->internal->packet_buffer,
                                        &s->internal->packet_buffer_end, pkt)
              : read_frame_internal(s, pkt);
        if (ret < 0)
            return ret;
         // jump straight to return_packet
        goto return_packet;
    }

    ....

        ret = avpriv_packet_list_put(&s->internal->packet_buffer,
                                 &s->internal->packet_buffer_end,
                                 pkt, NULL, 0);
        if (ret < 0) {
            av_packet_unref(pkt);
            return ret;
        }
    }

return_packet:

    st = s->streams[pkt->stream_index];
    if ((s->iformat->flags & AVFMT_GENERIC_INDEX) && pkt->flags & AV_PKT_FLAG_KEY) {
        ff_reduce_index(s, st->index);
        av_add_index_entry(st, pkt->pos, pkt->dts, 0, 0, AVINDEX_KEYFRAME);
    }

    if (is_relative(pkt->dts))
        pkt->dts -= RELATIVE_TS_BASE;
    if (is_relative(pkt->pts))
        pkt->pts -= RELATIVE_TS_BASE;

    return ret;
}