Contents
Basic Principles of a Video Player
Protocol Unwrapping
Protocol unwrapping parses streaming-protocol data into data in the corresponding standard container format. When audio and video are transmitted over a network, they are usually carried by streaming protocols such as HTTP, RTMP, or MMS. Alongside the audio/video data, these protocols also carry signaling data, including playback control (play, pause, stop) and descriptions of the network state. Protocol unwrapping removes the signaling data and keeps only the audio/video data. For example, data carried over RTMP yields FLV-format data after protocol unwrapping.
Demuxing
Demuxing separates the input container-format data into a compressed audio stream and a compressed video stream. There are many container formats, such as MP4, MKV, RMVB, TS, FLV, and AVI; their job is to pack already-compressed video and audio data together in a defined layout. For example, FLV data, after demuxing, yields an H.264-encoded video bitstream and an AAC-encoded audio bitstream.
Decoding
Decoding turns compressed audio/video data into uncompressed raw audio/video data. Audio compression standards include AAC, MP3, and AC-3; video compression standards include H.264, MPEG-2, and VC-1. Decoding is the most important and most complex part of the whole system. Through decoding, compressed video data becomes uncompressed color data such as YUV420P or RGB, and compressed audio data becomes uncompressed audio samples such as PCM.
Audio-Video Synchronization
Using the parameter information obtained during demuxing, this stage synchronizes the decoded video and audio data and sends them to the system's graphics card and sound card for playback.
Implementing a Simple Player — Audio/Video Playback
The goal of audio-video synchronization is to keep the played sound consistent with the displayed picture. Video is played frame by frame: the display device shows one frame at a time, and the playback speed is determined by the frame rate, which specifies how many frames are shown per second. Audio is played sample by sample: the audio device plays one sample at a time, and the playback speed is determined by the sample rate, which specifies how many samples are played per second. If video simply plays at its frame rate and audio at its sample rate with no synchronization mechanism, then even if audio and video start out roughly in sync, they will gradually drift apart, and the desynchronization will grow worse and worse. This is because, first, playback timing is hard to control precisely, and second, anomalies and errors accumulate over time. A synchronization strategy is therefore required: continually correct the time difference between audio and video so that picture and sound stay consistent overall.
Take a 44.1 kHz AAC audio stream and a 25 FPS H.264 video stream as an example, and look at the synchronization process in the ideal case:
Each channel of an AAC audio frame contains 1024 samples (it may also be 2048; see "FFmpeg's explanation of nb_samples, frame_size and profile"), so the playback duration of one frame is (1024/44100) × 1000 ms = 23.22 ms; the playback duration of one H.264 video frame is 1000 ms / 25 = 40 ms. Although the sound card's playback unit is the audio sample, we normally feed its buffer one audio frame at a time and update the audio playback position once per frame; that is, the audio clock is updated once every audio-frame duration, which is in fact what ffplay does. Let us call each audio-clock update point a playback point. In the ideal, perfectly synchronized case, audio/video playback proceeds as shown in the figure below:
Audio-video synchronization basically means choosing one clock (the audio clock, the video clock, or an external clock) as the master clock; the audio or video clock that is not the master becomes a slave clock. During playback, the master clock serves as the synchronization reference: the difference between each slave clock and the master clock is continually evaluated, and the slave clock is adjusted so that it catches up (when behind) or waits (when ahead). Depending on which clock is the master, synchronization falls into the following three modes:
Sync audio to video: the video clock is the master clock.
Sync video to audio: the audio clock is the master clock.
Sync audio and video to an external clock: the external clock is the master clock.
Program Structure
The author created the C++ project with Eclipse; the directory layout is as follows:
Program Source Code
audio.cpp
#include "player.h"
#include "packet.h"
#include "frame.h"
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len);
// Take a packet from packet_queue and decode it into a frame
static int audio_decode_frame(AVCodecContext *p_codec_ctx, packet_queue_t *p_pkt_queue, AVFrame *frame)
{
int ret;
while (1)
{
AVPacket pkt;
while (1)
{
//if (d->queue->abort_request)
// return -1;
// 3.2 One audio packet contains one or more audio frames. Each call to avcodec_receive_frame()
//     returns one frame, and this function returns. The next call into this function fetches
//     another frame, until avcodec_receive_frame() returns AVERROR(EAGAIN), meaning the decoder needs a new packet
ret = avcodec_receive_frame(p_codec_ctx, frame);
if (ret >= 0)
{
// Rebase the timestamp from d->avctx->pkt_timebase to the 1/frame->sample_rate timebase
AVRational tb = (AVRational) { 1, frame->sample_rate };
if (frame->pts != AV_NOPTS_VALUE)
{
frame->pts = av_rescale_q(frame->pts, p_codec_ctx->pkt_timebase, tb);
}
else
{
av_log(NULL, AV_LOG_WARNING, "frame->pts is AV_NOPTS_VALUE\n");
}
return 1;
}
else if (ret == AVERROR_EOF)
{
av_log(NULL, AV_LOG_INFO, "audio avcodec_receive_frame(): the decoder has been flushed\n");
avcodec_flush_buffers(p_codec_ctx);
return 0;
}
else if (ret == AVERROR(EAGAIN))
{
// av_log(NULL, AV_LOG_INFO, "audio avcodec_receive_frame(): input is not accepted in the current state\n");
break;
}
else
{
av_log(NULL, AV_LOG_ERROR, "audio avcodec_receive_frame(): other errors\n");
continue;
}
}
// 1. Take one packet out of the queue. The packet's serial is assigned to d->pkt_serial
if (packet_queue_get(p_pkt_queue, &pkt, true) < 0)
{
return -1;
}
// The first packet in packet_queue is always a flush_pkt. Every seek inserts a flush_pkt, updating the serial and starting a new playback sequence
if (pkt.data == NULL)
{
// Reset the decoder's internal state / flush its internal buffers. Should be called on seek or stream switch.
avcodec_flush_buffers(p_codec_ctx);
}
else
{
// 2. Send the packet to the decoder
//    Packets are sent in increasing dts order, e.g. IPBBPBB
//    pkt.pos identifies the packet's byte offset within the media file
if (avcodec_send_packet(p_codec_ctx, &pkt) == AVERROR(EAGAIN))
{
av_log(NULL, AV_LOG_ERROR, "receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
}
av_packet_unref(&pkt);
}
}
}
// Audio decoding thread: takes packets from the audio packet_queue, decodes them, and pushes frames into the audio frame_queue
static int audio_decode_thread(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
AVFrame *p_frame = av_frame_alloc();
frame_t *af;
int got_frame = 0;
AVRational tb;
int ret = 0;
if (p_frame == NULL)
{
return AVERROR(ENOMEM);
}
while (1)
{
got_frame = audio_decode_frame(is->p_acodec_ctx, &is->audio_pkt_queue, p_frame);
if (got_frame < 0)
{
goto the_end;
}
if (got_frame)
{
tb = (AVRational) { 1, p_frame->sample_rate };
if (!(af = frame_queue_peek_writable(&is->audio_frm_queue)))
goto the_end;
af->pts = (p_frame->pts == AV_NOPTS_VALUE) ? NAN : p_frame->pts * av_q2d(tb);
af->pos = p_frame->pkt_pos;
//-af->serial = is->auddec.pkt_serial;
// (samples per channel in this frame) / (sample rate) is this frame's playback duration
af->duration = av_q2d((AVRational) { p_frame->nb_samples, p_frame->sample_rate });
// Move the frame data into af->frame; af->frame points at the tail slot of the audio frame queue
av_frame_move_ref(af->frame, p_frame);
// Update the audio frame queue's size and write index
frame_queue_push(&is->audio_frm_queue);
}
}
the_end:
av_frame_free(&p_frame);
return ret;
}
int open_audio_stream(player_stat_t *is)
{
AVCodecContext *p_codec_ctx;
AVCodecParameters *p_codec_par = NULL;
AVCodec* p_codec = NULL;
int ret;
// 1. Build the decoder AVCodecContext for the audio stream
// 1.1 Get the decoder parameters AVCodecParameters
p_codec_par = is->p_audio_stream->codecpar;
// 1.2 Find the decoder
p_codec = avcodec_find_decoder(p_codec_par->codec_id);
if (p_codec == NULL)
{
av_log(NULL, AV_LOG_ERROR, "Cannot find codec!\n");
return -1;
}
// 1.3 Build the decoder AVCodecContext
// 1.3.1 Initialize p_codec_ctx: allocate the struct and set its members to defaults based on p_codec
p_codec_ctx = avcodec_alloc_context3(p_codec);
if (p_codec_ctx == NULL)
{
av_log(NULL, AV_LOG_ERROR, "avcodec_alloc_context3() failed\n");
return -1;
}
// 1.3.2 Initialize p_codec_ctx: copy p_codec_par ==> p_codec_ctx
ret = avcodec_parameters_to_context(p_codec_ctx, p_codec_par);
if (ret < 0)
{
av_log(NULL, AV_LOG_ERROR, "avcodec_parameters_to_context() failed %d\n", ret);
return -1;
}
// 1.3.3 Initialize p_codec_ctx: open the codec context with p_codec, completing initialization
ret = avcodec_open2(p_codec_ctx, p_codec, NULL);
if (ret < 0)
{
av_log(NULL, AV_LOG_ERROR, "avcodec_open2() failed %d\n", ret);
return -1;
}
p_codec_ctx->pkt_timebase = is->p_audio_stream->time_base;
is->p_acodec_ctx = p_codec_ctx;
// 2. Create the audio decoding thread
SDL_CreateThread(audio_decode_thread, "audio decode thread", is);
return 0;
}
static int audio_resample(player_stat_t *is, int64_t audio_callback_time)
{
int data_size, resampled_data_size;
int64_t dec_channel_layout;
av_unused double audio_clock0;
int wanted_nb_samples;
frame_t *af;
if (is->paused)
{
return -1;
}
#if defined(_WIN32)
while (frame_queue_nb_remaining(&is->audio_frm_queue) == 0)
{
if ((av_gettime_relative() - audio_callback_time) > 1000000LL * is->audio_hw_buf_size / is->audio_param_tgt.bytes_per_sec / 2)
return -1;
av_usleep(1000);
}
#endif
// If the queue head is readable, af points to the readable frame
if (!(af = frame_queue_peek_readable(&is->audio_frm_queue)))
return -1;
frame_queue_next(&is->audio_frm_queue);
// Get the required buffer size for the audio parameters specified in the frame
data_size = av_samples_get_buffer_size(NULL, af->frame->channels, // linesize, channel count
af->frame->nb_samples, // samples per channel in this frame
(AVSampleFormat) af->frame->format, 1); // sample format, no alignment
// Get the channel layout
dec_channel_layout =
(af->frame->channel_layout && af->frame->channels == av_get_channel_layout_nb_channels(af->frame->channel_layout)) ?
af->frame->channel_layout : av_get_default_channel_layout(af->frame->channels);
wanted_nb_samples = af->frame->nb_samples;
// is->audio_param_tgt holds the audio parameters SDL accepts, obtained in audio_open()
// audio_open() also does "is->audio_src = is->audio_param_tgt"
// So: if the frame's audio parameters == is->audio_src == is->audio_param_tgt, resampling can be skipped (is->swr_ctx is then NULL)
// Otherwise, set up is->swr_ctx from the frame (source) and is->audio_param_tgt (target), and copy the frame's parameters into is->audio_src
if (af->frame->format != is->audio_param_src.fmt ||
dec_channel_layout != is->audio_param_src.channel_layout ||
af->frame->sample_rate != is->audio_param_src.freq)
{
swr_free(&is->audio_swr_ctx);
// Set up is->audio_swr_ctx from the frame's (source) and is->audio_param_tgt's (target) audio parameters
is->audio_swr_ctx = swr_alloc_set_opts(NULL,
is->audio_param_tgt.channel_layout, is->audio_param_tgt.fmt, is->audio_param_tgt.freq,
dec_channel_layout, (AVSampleFormat) af->frame->format,
af->frame->sample_rate,
0, NULL);
if (!is->audio_swr_ctx || swr_init(is->audio_swr_ctx) < 0)
{
av_log(NULL, AV_LOG_ERROR,
"Cannot create sample rate converter for conversion of %d Hz %s %d channels to %d Hz %s %d channels!\n",
af->frame->sample_rate, av_get_sample_fmt_name((AVSampleFormat) af->frame->format),
af->frame->channels,
is->audio_param_tgt.freq, av_get_sample_fmt_name(is->audio_param_tgt.fmt), is->audio_param_tgt.channels);
swr_free(&is->audio_swr_ctx);
return -1;
}
// Update is->audio_src from the frame's parameters. After the first update this branch rarely runs again, since all frames in one audio stream share the same parameters
is->audio_param_src.channel_layout = dec_channel_layout;
is->audio_param_src.channels = af->frame->channels;
is->audio_param_src.freq = af->frame->sample_rate;
is->audio_param_src.fmt = (AVSampleFormat) af->frame->format;
}
if (is->audio_swr_ctx)
{
// Resampling input 1: the input sample count, af->frame->nb_samples
// Resampling input 2: the input audio buffer
const uint8_t **in = (const uint8_t **)af->frame->extended_data;
// Resampling output 1: the output buffer size
// Resampling output 2: the output audio buffer
uint8_t **out = &is->audio_frm_rwr;
// Resampling output: output sample count (padded with 256 extra samples)
int out_count = (int64_t)wanted_nb_samples * is->audio_param_tgt.freq / af->frame->sample_rate + 256;
// Resampling output: output buffer size in bytes
int out_size = av_samples_get_buffer_size(NULL, is->audio_param_tgt.channels, out_count, is->audio_param_tgt.fmt, 0);
int len2;
if (out_size < 0)
{
av_log(NULL, AV_LOG_ERROR, "av_samples_get_buffer_size() failed\n");
return -1;
}
av_fast_malloc(&is->audio_frm_rwr, &is->audio_frm_rwr_size, out_size);
if (!is->audio_frm_rwr)
return AVERROR(ENOMEM);
// Audio resampling: the return value is the number of per-channel samples in the resampled output
len2 = swr_convert(is->audio_swr_ctx, out, out_count, in, af->frame->nb_samples);
if (len2 < 0)
{
av_log(NULL, AV_LOG_ERROR, "swr_convert() failed\n");
return -1;
}
if (len2 == out_count)
{
av_log(NULL, AV_LOG_WARNING, "audio buffer is probably too small\n");
if (swr_init(is->audio_swr_ctx) < 0)
swr_free(&is->audio_swr_ctx);
}
is->p_audio_frm = is->audio_frm_rwr;
// Size in bytes of the resampled audio frame
resampled_data_size = len2 * is->audio_param_tgt.channels * av_get_bytes_per_sample(is->audio_param_tgt.fmt);
// printf("%s:%d, resampled_data_size = %d\n", __FUNCTION__, __LINE__, resampled_data_size);
}
else
{
// No resampling: point directly at the audio data inside the frame
is->p_audio_frm = af->frame->data[0];
resampled_data_size = data_size;
}
audio_clock0 = is->audio_clock;
/* update the audio clock with the pts */
if (!isnan(af->pts))
{
is->audio_clock = af->pts + (double)af->frame->nb_samples / af->frame->sample_rate;
}
else
{
is->audio_clock = NAN;
}
is->audio_clock_serial = af->serial;
#ifdef DEBUG
{
static double last_clock;
printf("audio: delay=%0.3f clock=%0.3f clock0=%0.3f\n",
is->audio_clock - last_clock,
is->audio_clock, audio_clock0);
last_clock = is->audio_clock;
}
#endif
return resampled_data_size;
}
static int open_audio_playing(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
SDL_AudioSpec wanted_spec;
SDL_AudioSpec actual_spec;
static SDL_AudioDeviceID audio_dev;
// 2. Open the audio device and create the audio processing thread
// 2.1 Open the audio device; wanted_spec holds the desired parameters, actual_spec receives what the device actually supports
// 1) SDL offers two ways of feeding audio data to the device:
//    a. pull: SDL calls the callback at a fixed cadence, and the callback supplies the audio data
//    b. push: the application calls SDL_QueueAudio() at its own cadence to supply data; in this case wanted_spec.callback = NULL
// 2) After the device is opened it plays silence and the callback is not started; calling SDL_PauseAudio(0) starts the callback and normal playback begins
wanted_spec.freq = is->p_acodec_ctx->sample_rate; // sample rate
wanted_spec.format = AUDIO_S16SYS; // S = signed, 16 = sample depth, SYS = native byte order
wanted_spec.channels = is->p_acodec_ctx->channels; // channel count
wanted_spec.silence = 0; // silence value
// wanted_spec.samples = SDL_AUDIO_BUFFER_SIZE; // SDL audio buffer size, in sample frames
// SDL audio buffer size, in sample frames (per-channel sample size x channel count)
wanted_spec.samples = FFMAX(SDL_AUDIO_MIN_BUFFER_SIZE, 2 << av_log2(wanted_spec.freq / SDL_AUDIO_MAX_CALLBACKS_PER_SEC));
wanted_spec.callback = sdl_audio_callback; // callback; if NULL, the SDL_QueueAudio() mechanism should be used instead
wanted_spec.userdata = is; // parameter passed to the callback
if (!(audio_dev = SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &actual_spec, SDL_AUDIO_ALLOW_FREQUENCY_CHANGE | SDL_AUDIO_ALLOW_CHANNELS_CHANGE)))
{
av_log(NULL, AV_LOG_ERROR, "SDL_OpenAudio() failed: %s\n", SDL_GetError());
return -1;
}
// 2.2 Build the resampling target parameters from the SDL audio parameters
// wanted_spec is what we asked for, actual_spec is what we got; both are SDL-side parameters.
// audio_param here holds FFmpeg-side parameters, which must be parameters SDL can play; resampling uses them later.
// The format of a decoded audio frame is not necessarily supported by SDL: a frame may be planar, which SDL 2.0 does not support.
// Feeding such frames directly into the SDL audio buffer would make playback fail, so the frame must first be resampled
// (format-converted) into an SDL-supported layout before being written into the SDL audio buffer
is->audio_param_tgt.fmt = AV_SAMPLE_FMT_S16;
is->audio_param_tgt.freq = actual_spec.freq;
is->audio_param_tgt.channel_layout = av_get_default_channel_layout(actual_spec.channels);
is->audio_param_tgt.channels = actual_spec.channels;
is->audio_param_tgt.frame_size = av_samples_get_buffer_size(NULL, actual_spec.channels, 1, is->audio_param_tgt.fmt, 1);
is->audio_param_tgt.bytes_per_sec = av_samples_get_buffer_size(NULL, actual_spec.channels, actual_spec.freq, is->audio_param_tgt.fmt, 1);
if (is->audio_param_tgt.bytes_per_sec <= 0 || is->audio_param_tgt.frame_size <= 0)
{
av_log(NULL, AV_LOG_ERROR, "av_samples_get_buffer_size failed\n");
return -1;
}
is->audio_param_src = is->audio_param_tgt;
is->audio_hw_buf_size = actual_spec.size; // SDL audio buffer size in bytes
is->audio_frm_size = 0;
is->audio_cp_index = 0;
// 3. Pause/resume audio callback processing: 1 pauses, 0 resumes.
//    The callback is not started when the device is opened; SDL_PauseAudioDevice(audio_dev, 0) starts it.
//    This lets us safely initialize the callback's data after opening the device, and start the callback once everything is ready.
//    While paused, silence is written to the audio device.
SDL_PauseAudioDevice(audio_dev, 0);
return 0;
}
// Audio callback: reads packets from the queue, decodes them, and plays the result
// Called by SDL on demand; it does not run in the user's main thread, so shared data needs protection
// \param[in] opaque parameter supplied by the user when registering the callback
// \param[out] stream audio buffer address; decoded audio data is written into this buffer
// \param[in] len audio buffer size in bytes
// Once the callback returns, the buffer pointed to by stream becomes invalid
// Stereo samples are interleaved as LRLRLR
static void sdl_audio_callback(void *opaque, Uint8 *stream, int len)
{
player_stat_t *is = (player_stat_t *)opaque;
int audio_size, len1;
int64_t audio_callback_time = av_gettime_relative();
while (len > 0) // len equals is->audio_hw_buf_size, the SDL audio buffer size obtained in audio_open()
{
if (is->audio_cp_index >= (int)is->audio_frm_size)
{
// 1. Take a frame from the audio frame queue and convert it to the format the audio device supports; the return value is the resampled frame size
audio_size = audio_resample(is, audio_callback_time);
if (audio_size < 0)
{
/* if error, just output silence */
is->p_audio_frm = NULL;
is->audio_frm_size = SDL_AUDIO_MIN_BUFFER_SIZE / is->audio_param_tgt.frame_size * is->audio_param_tgt.frame_size;
}
else
{
is->audio_frm_size = audio_size;
}
is->audio_cp_index = 0;
}
// is->audio_cp_index exists because one audio frame may be larger than the SDL audio buffer, so one frame may need several copies.
// is->audio_cp_index marks how much of the resampled frame has already been copied into the SDL buffer; len1 is the amount copied this round
len1 = is->audio_frm_size - is->audio_cp_index;
if (len1 > len)
{
len1 = len;
}
// 2. Copy the converted audio data into the audio buffer stream; from there, playback is the audio device driver's job
if (is->p_audio_frm != NULL)
{
memcpy(stream, (uint8_t *)is->p_audio_frm + is->audio_cp_index, len1);
}
else
{
memset(stream, 0, len1);
}
len -= len1;
stream += len1;
is->audio_cp_index += len1;
}
// is->audio_write_buf_size is the amount of this frame not yet copied into the SDL audio buffer
is->audio_write_buf_size = is->audio_frm_size - is->audio_cp_index;
/* Let's assume the audio driver that is used by SDL has two periods. */
// 3. Update the clock
if (!isnan(is->audio_clock))
{
// Update the audio clock; update point: after each copy into the sound card buffer
// is->audio_clock set earlier (in audio_resample()) is per audio frame, so subtract the time covered by the not-yet-copied data
set_clock_at(&is->audio_clk,
is->audio_clock - (double)(2 * is->audio_hw_buf_size + is->audio_write_buf_size) / is->audio_param_tgt.bytes_per_sec,
is->audio_clock_serial,
audio_callback_time / 1000000.0);
}
}
int open_audio(player_stat_t *is)
{
open_audio_stream(is);
open_audio_playing(is);
return 0;
}
audio.h
#ifndef __AUDIO_H__
#define __AUDIO_H__
#include "player.h"
int open_audio(player_stat_t *is);
#endif
demux.cpp
#include "demux.h"
#include "packet.h"
static int decode_interrupt_cb(void *ctx)
{
player_stat_t *is = (player_stat_t*) ctx;
return is->abort_request;
}
static int demux_init(player_stat_t *is)
{
AVFormatContext *p_fmt_ctx = NULL;
int err, i, ret;
int a_idx;
int v_idx;
p_fmt_ctx = avformat_alloc_context();
if (!p_fmt_ctx)
{
printf("Could not allocate context.\n");
ret = AVERROR(ENOMEM);
goto fail;
}
// Interrupt callback mechanism: gives the low-level I/O layer a hook, e.g. to abort an I/O operation
p_fmt_ctx->interrupt_callback.callback = decode_interrupt_cb;
p_fmt_ctx->interrupt_callback.opaque = is;
// 1. Build the AVFormatContext
// 1.1 Open the media file: read the file header and store the format information in the format context
err = avformat_open_input(&p_fmt_ctx, is->filename, NULL, NULL);
if (err < 0)
{
printf("avformat_open_input() failed %d\n", err);
ret = -1;
goto fail;
}
is->p_fmt_ctx = p_fmt_ctx;
// 1.2 Probe stream information: read a chunk of the file, try decoding it, and fill p_fmt_ctx->streams
//     p_fmt_ctx->streams is a pointer array of size p_fmt_ctx->nb_streams
err = avformat_find_stream_info(p_fmt_ctx, NULL);
if (err < 0)
{
printf("avformat_find_stream_info() failed %d\n", err);
ret = -1;
goto fail;
}
// 2. Find the first audio stream and the first video stream
a_idx = -1;
v_idx = -1;
for (i=0; i<(int)p_fmt_ctx->nb_streams; i++)
{
if ((p_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) &&
(a_idx == -1))
{
a_idx = i;
printf("Found audio stream, index %d\n", a_idx);
}
if ((p_fmt_ctx->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) &&
(v_idx == -1))
{
v_idx = i;
printf("Found video stream, index %d\n", v_idx);
}
if (a_idx != -1 && v_idx != -1)
{
break;
}
}
if (a_idx == -1 && v_idx == -1)
{
printf("Cannot find any audio/video stream\n");
ret = -1;
fail:
if (p_fmt_ctx != NULL)
{
avformat_close_input(&p_fmt_ctx);
}
return ret;
}
is->audio_idx = a_idx;
is->video_idx = v_idx;
is->p_audio_stream = p_fmt_ctx->streams[a_idx];
is->p_video_stream = p_fmt_ctx->streams[v_idx];
return 0;
}
int demux_deinit()
{
return 0;
}
static int stream_has_enough_packets(AVStream *st, int stream_id, packet_queue_t *queue)
{
return stream_id < 0 ||
queue->abort_request ||
(st->disposition & AV_DISPOSITION_ATTACHED_PIC) ||
queue->nb_packets > MIN_FRAMES && (!queue->duration || av_q2d(st->time_base) * queue->duration > 1.0);
}
/* this thread gets the stream from the disk or the network */
static int demux_thread(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
AVFormatContext *p_fmt_ctx = is->p_fmt_ctx;
int ret;
AVPacket pkt1, *pkt = &pkt1;
SDL_mutex *wait_mutex = SDL_CreateMutex();
printf("demux_thread running...\n");
// 4. Demuxing loop
while (1)
{
if (is->abort_request)
{
break;
}
/* if the queues are full, no need to read more */
if (is->audio_pkt_queue.size + is->video_pkt_queue.size > MAX_QUEUE_SIZE ||
(stream_has_enough_packets(is->p_audio_stream, is->audio_idx, &is->audio_pkt_queue) &&
stream_has_enough_packets(is->p_video_stream, is->video_idx, &is->video_pkt_queue)))
{
/* wait 10 ms */
SDL_LockMutex(wait_mutex);
SDL_CondWaitTimeout(is->continue_read_thread, wait_mutex, 10);
SDL_UnlockMutex(wait_mutex);
continue;
}
// 4.1 Read one packet from the input file
ret = av_read_frame(is->p_fmt_ctx, pkt);
if (ret < 0)
{
if ((ret == AVERROR_EOF))// || avio_feof(ic->pb)) && !is->eof)
{
// Input file fully read: push a NULL packet into each packet queue to flush the decoders; otherwise frames buffered inside the decoders cannot be retrieved
if (is->video_idx >= 0)
{
packet_queue_put_nullpacket(&is->video_pkt_queue, is->video_idx);
}
if (is->audio_idx >= 0)
{
packet_queue_put_nullpacket(&is->audio_pkt_queue, is->audio_idx);
}
}
SDL_LockMutex(wait_mutex);
SDL_CondWaitTimeout(is->continue_read_thread, wait_mutex, 10);
SDL_UnlockMutex(wait_mutex);
continue;
}
// 4.2 Route the packet into the matching queue by type (audio, video, subtitle)
if (pkt->stream_index == is->audio_idx)
{
packet_queue_put(&is->audio_pkt_queue, pkt);
}
else if (pkt->stream_index == is->video_idx)
{
packet_queue_put(&is->video_pkt_queue, pkt);
}
else
{
av_packet_unref(pkt);
}
}
ret = 0;
if (ret != 0)
{
SDL_Event event;
event.type = FF_QUIT_EVENT;
event.user.data1 = is;
SDL_PushEvent(&event);
}
SDL_DestroyMutex(wait_mutex);
return 0;
}
int open_demux(player_stat_t *is)
{
if (demux_init(is) != 0)
{
printf("demux_init() failed\n");
return -1;
}
is->read_tid = SDL_CreateThread(demux_thread, "demux_thread", is);
if (is->read_tid == NULL)
{
printf("SDL_CreateThread() failed: %s\n", SDL_GetError());
return -1;
}
return 0;
}
demux.h
#ifndef __DEMUX_H__
#define __DEMUX_H__
#include "player.h"
int open_demux(player_stat_t *is);
#endif
frame.cpp
#include "frame.h"
#include "player.h"
void frame_queue_unref_item(frame_t *vp)
{
av_frame_unref(vp->frame);
}
int frame_queue_init(frame_queue_t *f, packet_queue_t *pktq, int max_size, int keep_last)
{
int i;
memset(f, 0, sizeof(frame_queue_t));
if (!(f->mutex = SDL_CreateMutex())) {
av_log(NULL, AV_LOG_FATAL, "SDL_CreateMutex(): %s\n", SDL_GetError());
return AVERROR(ENOMEM);
}
if (!(f->cond = SDL_CreateCond())) {
av_log(NULL, AV_LOG_FATAL, "SDL_CreateCond(): %s\n", SDL_GetError());
return AVERROR(ENOMEM);
}
f->pktq = pktq;
f->max_size = FFMIN(max_size, FRAME_QUEUE_SIZE);
f->keep_last = !!keep_last;
for (i = 0; i < f->max_size; i++)
if (!(f->queue[i].frame = av_frame_alloc()))
return AVERROR(ENOMEM);
return 0;
}
void frame_queue_destory(frame_queue_t *f)
{
int i;
for (i = 0; i < f->max_size; i++) {
frame_t *vp = &f->queue[i];
frame_queue_unref_item(vp);
av_frame_free(&vp->frame);
}
SDL_DestroyMutex(f->mutex);
SDL_DestroyCond(f->cond);
}
void frame_queue_signal(frame_queue_t *f)
{
SDL_LockMutex(f->mutex);
SDL_CondSignal(f->cond);
SDL_UnlockMutex(f->mutex);
}
frame_t *frame_queue_peek(frame_queue_t *f)
{
return &f->queue[(f->rindex + f->rindex_shown) % f->max_size];
}
frame_t *frame_queue_peek_next(frame_queue_t *f)
{
return &f->queue[(f->rindex + f->rindex_shown + 1) % f->max_size];
}
// Get the frame being played: read without removing, because the frame must stay cached for the next use. After display it becomes the previous frame
frame_t *frame_queue_peek_last(frame_queue_t *f)
{
return &f->queue[f->rindex];
}
// Reserve a writable frame slot at the queue tail; wait if no space is available
frame_t *frame_queue_peek_writable(frame_queue_t *f)
{
/* wait until we have space to put a new frame */
SDL_LockMutex(f->mutex);
while (f->size >= f->max_size &&
!f->pktq->abort_request) {
SDL_CondWait(f->cond, f->mutex);
}
SDL_UnlockMutex(f->mutex);
if (f->pktq->abort_request)
return NULL;
return &f->queue[f->windex];
}
// Read a frame from the queue head without removing it; wait if none is available
frame_t *frame_queue_peek_readable(frame_queue_t *f)
{
/* wait until we have a readable new frame */
SDL_LockMutex(f->mutex);
while (f->size - f->rindex_shown <= 0 &&
!f->pktq->abort_request) {
SDL_CondWait(f->cond, f->mutex);
}
SDL_UnlockMutex(f->mutex);
if (f->pktq->abort_request)
return NULL;
return &f->queue[(f->rindex + f->rindex_shown) % f->max_size];
}
// Push a frame onto the queue tail. Only the count and write index are updated, so the frame data must be written into the slot before calling this
void frame_queue_push(frame_queue_t *f)
{
if (++f->windex == f->max_size)
f->windex = 0;
SDL_LockMutex(f->mutex);
f->size++;
SDL_CondSignal(f->cond);
SDL_UnlockMutex(f->mutex);
}
// The frame at the read index (rindex) has been displayed; remove it without reading, and advance the read index by 1
void frame_queue_next(frame_queue_t *f)
{
if (f->keep_last && !f->rindex_shown) {
f->rindex_shown = 1;
return;
}
frame_queue_unref_item(&f->queue[f->rindex]);
if (++f->rindex == f->max_size)
f->rindex = 0;
SDL_LockMutex(f->mutex);
f->size--;
SDL_CondSignal(f->cond);
SDL_UnlockMutex(f->mutex);
}
/* return the number of undisplayed frames in the queue */
int frame_queue_nb_remaining(frame_queue_t *f)
{
return f->size - f->rindex_shown;
}
/* return last shown position */
int64_t frame_queue_last_pos(frame_queue_t *f)
{
frame_t *fp = &f->queue[f->rindex];
if (f->rindex_shown && fp->serial == f->pktq->serial)
return fp->pos;
else
return -1;
}
frame.h
#ifndef __FRAME_H__
#define __FRAME_H__
#include "player.h"
void frame_queue_unref_item(frame_t *vp);
int frame_queue_init(frame_queue_t *f, packet_queue_t *pktq, int max_size, int keep_last);
void frame_queue_destory(frame_queue_t *f);
void frame_queue_signal(frame_queue_t *f);
frame_t *frame_queue_peek(frame_queue_t *f);
frame_t *frame_queue_peek_next(frame_queue_t *f);
frame_t *frame_queue_peek_last(frame_queue_t *f);
frame_t *frame_queue_peek_writable(frame_queue_t *f);
frame_t *frame_queue_peek_readable(frame_queue_t *f);
void frame_queue_push(frame_queue_t *f);
void frame_queue_next(frame_queue_t *f);
int frame_queue_nb_remaining(frame_queue_t *f);
int64_t frame_queue_last_pos(frame_queue_t *f);
#endif
main.cpp
#include <stdio.h>
#include "player.h"
int main(int argc, char *argv[])
{
const char *filename = "input.mp4";
printf("Try playing %s ...\n", filename);
player_running(filename);
return 0;
}
packet.cpp
#include "packet.h"
int packet_queue_init(packet_queue_t *q)
{
memset(q, 0, sizeof(packet_queue_t));
q->mutex = SDL_CreateMutex();
if (!q->mutex)
{
printf("SDL_CreateMutex(): %s\n", SDL_GetError());
return AVERROR(ENOMEM);
}
q->cond = SDL_CreateCond();
if (!q->cond)
{
printf("SDL_CreateCond(): %s\n", SDL_GetError());
return AVERROR(ENOMEM);
}
q->abort_request = 0;
return 0;
}
// Append to the queue tail. pkt is a packet of compressed, not-yet-decoded data
int packet_queue_put(packet_queue_t *q, AVPacket *pkt)
{
AVPacketList *pkt_list;
if (av_packet_make_refcounted(pkt) < 0)
{
printf("[pkt] is not reference counted\n");
return -1;
}
pkt_list = (AVPacketList*) av_malloc(sizeof(AVPacketList));
if (!pkt_list)
{
return -1;
}
pkt_list->pkt = *pkt;
pkt_list->next = NULL;
SDL_LockMutex(q->mutex);
if (!q->last_pkt) // queue is empty
{
q->first_pkt = pkt_list;
}
else
{
q->last_pkt->next = pkt_list;
}
q->last_pkt = pkt_list;
q->nb_packets++;
q->size += pkt_list->pkt.size;
// Signal the condition variable: wake one thread waiting on q->cond
SDL_CondSignal(q->cond);
SDL_UnlockMutex(q->mutex);
return 0;
}
// Read from the queue head.
int packet_queue_get(packet_queue_t *q, AVPacket *pkt, int block)
{
AVPacketList *p_pkt_node;
int ret;
SDL_LockMutex(q->mutex);
while (1)
{
p_pkt_node = q->first_pkt;
if (p_pkt_node) // queue not empty: take one node out
{
q->first_pkt = p_pkt_node->next;
if (!q->first_pkt)
{
q->last_pkt = NULL;
}
q->nb_packets--;
q->size -= p_pkt_node->pkt.size;
*pkt = p_pkt_node->pkt;
av_free(p_pkt_node);
ret = 1;
break;
}
else if (!block) // queue empty and non-blocking: return immediately
{
ret = 0;
break;
}
else // queue empty and blocking: wait
{
SDL_CondWait(q->cond, q->mutex);
}
}
SDL_UnlockMutex(q->mutex);
return ret;
}
int packet_queue_put_nullpacket(packet_queue_t *q, int stream_index)
{
AVPacket pkt1, *pkt = &pkt1;
av_init_packet(pkt);
pkt->data = NULL;
pkt->size = 0;
pkt->stream_index = stream_index;
return packet_queue_put(q, pkt);
}
void packet_queue_flush(packet_queue_t *q)
{
AVPacketList *pkt, *pkt1;
SDL_LockMutex(q->mutex);
for (pkt = q->first_pkt; pkt; pkt = pkt1) {
pkt1 = pkt->next;
av_packet_unref(&pkt->pkt);
av_freep(&pkt);
}
q->last_pkt = NULL;
q->first_pkt = NULL;
q->nb_packets = 0;
q->size = 0;
q->duration = 0;
SDL_UnlockMutex(q->mutex);
}
void packet_queue_destroy(packet_queue_t *q)
{
packet_queue_flush(q);
SDL_DestroyMutex(q->mutex);
SDL_DestroyCond(q->cond);
}
void packet_queue_abort(packet_queue_t *q)
{
SDL_LockMutex(q->mutex);
q->abort_request = 1;
SDL_CondSignal(q->cond);
SDL_UnlockMutex(q->mutex);
}
packet.h
#ifndef __PACKET_H__
#define __PACKET_H__
#include "player.h"
int packet_queue_init(packet_queue_t *q);
int packet_queue_put(packet_queue_t *q, AVPacket *pkt);
int packet_queue_get(packet_queue_t *q, AVPacket *pkt, int block);
int packet_queue_put_nullpacket(packet_queue_t *q, int stream_index);
void packet_queue_destroy(packet_queue_t *q);
void packet_queue_abort(packet_queue_t *q);
#endif
player.cpp
/*******************************************************************************
* player.c
*
* history:
* 2018-11-27 - [lei] Create file: a simplest ffmpeg player
* 2018-12-01 - [lei] Playing audio
* 2018-12-06 - [lei] Playing audio & video
* 2019-01-06 - [lei] Add audio resampling, fix bug of unsupported audio
* format(such as planar)
* 2019-01-16 - [lei] Sync video to audio.
*
* details:
* A simple ffmpeg player.
*
* reference:
* ffplay.c in FFmpeg 4.1 project.
*******************************************************************************/
#include <stdio.h>
#include <stdbool.h>
#include <assert.h>
#include "player.h"
#include "frame.h"
#include "packet.h"
#include "demux.h"
#include "video.h"
#include "audio.h"
static player_stat_t *player_init(const char *p_input_file);
static int player_deinit(player_stat_t *is);
// Return value: the previous frame's pts updated to now (previous pts + elapsed time)
double get_clock(play_clock_t *c)
{
if (*c->queue_serial != c->serial)
{
return NAN;
}
if (c->paused)
{
return c->pts;
}
else
{
double time = av_gettime_relative() / 1000000.0;
double ret = c->pts_drift + time; // expands to: c->pts + (time - c->last_updated)
return ret;
}
}
void set_clock_at(play_clock_t *c, double pts, int serial, double time)
{
c->pts = pts;
c->last_updated = time;
c->pts_drift = c->pts - time;
c->serial = serial;
}
void set_clock(play_clock_t *c, double pts, int serial)
{
double time = av_gettime_relative() / 1000000.0;
set_clock_at(c, pts, serial, time);
}
static void set_clock_speed(play_clock_t *c, double speed)
{
set_clock(c, get_clock(c), c->serial);
c->speed = speed;
}
void init_clock(play_clock_t *c, int *queue_serial)
{
c->speed = 1.0;
c->paused = 0;
c->queue_serial = queue_serial;
set_clock(c, NAN, -1);
}
static void sync_play_clock_to_slave(play_clock_t *c, play_clock_t *slave)
{
double clock = get_clock(c);
double slave_clock = get_clock(slave);
if (!isnan(slave_clock) && (isnan(clock) || fabs(clock - slave_clock) > AV_NOSYNC_THRESHOLD))
set_clock(c, slave_clock, slave->serial);
}
static void do_exit(player_stat_t *is)
{
SDL_Renderer *renderer = NULL;
SDL_Window *window = NULL;
if (is)
{
// Save the SDL handles before calling player_deinit(), which frees the is struct itself
renderer = is->sdl_video.renderer;
window = is->sdl_video.window;
player_deinit(is);
}
if (renderer)
SDL_DestroyRenderer(renderer);
if (window)
SDL_DestroyWindow(window);
avformat_network_deinit();
SDL_Quit();
exit(0);
}
static player_stat_t *player_init(const char *p_input_file)
{
player_stat_t *is;
is = (player_stat_t*) av_mallocz(sizeof(player_stat_t));
if (!is)
{
return NULL;
}
is->filename = av_strdup(p_input_file);
if (is->filename == NULL)
{
goto fail;
}
/* start video display */
if (frame_queue_init(&is->video_frm_queue, &is->video_pkt_queue, VIDEO_PICTURE_QUEUE_SIZE, 1) < 0 ||
frame_queue_init(&is->audio_frm_queue, &is->audio_pkt_queue, SAMPLE_QUEUE_SIZE, 1) < 0)
{
goto fail;
}
if (packet_queue_init(&is->video_pkt_queue) < 0 ||
packet_queue_init(&is->audio_pkt_queue) < 0)
{
goto fail;
}
AVPacket flush_pkt;
av_init_packet(&flush_pkt); // initialize all fields; data == NULL marks a flush packet
flush_pkt.data = NULL;
flush_pkt.size = 0;
packet_queue_put(&is->video_pkt_queue, &flush_pkt);
packet_queue_put(&is->audio_pkt_queue, &flush_pkt);
if (!(is->continue_read_thread = SDL_CreateCond()))
{
av_log(NULL, AV_LOG_FATAL, "SDL_CreateCond(): %s\n", SDL_GetError());
fail:
player_deinit(is);
return NULL;
}
init_clock(&is->video_clk, &is->video_pkt_queue.serial);
init_clock(&is->audio_clk, &is->audio_pkt_queue.serial);
is->abort_request = 0;
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
{
av_log(NULL, AV_LOG_FATAL, "Could not initialize SDL - %s\n", SDL_GetError());
av_log(NULL, AV_LOG_FATAL, "(Did you set the DISPLAY variable?)\n");
exit(1);
}
return is;
}
static int player_deinit(player_stat_t *is)
{
/* XXX: use a special url_shutdown call to abort parse cleanly */
is->abort_request = 1;
SDL_WaitThread(is->read_tid, NULL);
/* close each stream */
if (is->audio_idx >= 0)
{
// stream_component_close(is, is->audio_idx);
}
if (is->video_idx >= 0)
{
// stream_component_close(is, is->video_idx);
}
avformat_close_input(&is->p_fmt_ctx);
packet_queue_abort(&is->video_pkt_queue);
packet_queue_abort(&is->audio_pkt_queue);
packet_queue_destroy(&is->video_pkt_queue);
packet_queue_destroy(&is->audio_pkt_queue);
/* free all pictures */
frame_queue_destory(&is->video_frm_queue);
frame_queue_destory(&is->audio_frm_queue);
SDL_DestroyCond(is->continue_read_thread);
sws_freeContext(is->img_convert_ctx);
av_free(is->filename);
if (is->sdl_video.texture)
{
SDL_DestroyTexture(is->sdl_video.texture);
}
av_free(is);
return 0;
}
/* pause or resume the video */
static void stream_toggle_pause(player_stat_t *is)
{
if (is->paused)
{
// We are currently paused and about to resume playback. Before resuming, add the time elapsed during the pause to frame_timer
is->frame_timer += av_gettime_relative() / 1000000.0 - is->video_clk.last_updated;
set_clock(&is->video_clk, get_clock(&is->video_clk), is->video_clk.serial);
}
is->paused = is->audio_clk.paused = is->video_clk.paused = !is->paused;
}
static void toggle_pause(player_stat_t *is)
{
stream_toggle_pause(is);
is->step = 0;
}
int player_running(const char *p_input_file)
{
player_stat_t *is = NULL;
is = player_init(p_input_file);
if (is == NULL)
{
printf("player init failed\n");
do_exit(is);
}
open_demux(is);
open_video(is);
open_audio(is);
SDL_Event event;
while (1)
{
SDL_PumpEvents();
// If the SDL event queue is empty, keep playing video frames in this inner loop; otherwise take an event from the head of the queue and handle it in the switch below
while (!SDL_PeepEvents(&event, 1, SDL_GETEVENT, SDL_FIRSTEVENT, SDL_LASTEVENT))
{
av_usleep(100000);
SDL_PumpEvents();
}
switch (event.type) {
case SDL_KEYDOWN:
if (event.key.keysym.sym == SDLK_ESCAPE)
{
do_exit(is);
break;
}
switch (event.key.keysym.sym) {
case SDLK_SPACE: // space bar: toggle pause
toggle_pause(is);
break;
default:
break;
}
break;
case SDL_QUIT:
case FF_QUIT_EVENT:
do_exit(is);
break;
default:
break;
}
}
return 0;
}
player.h
#ifndef __PLAYER_H__
#define __PLAYER_H__
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <libavutil/frame.h>
#include <libavutil/time.h>
#include <libavutil/imgutils.h>
#include <SDL2/SDL.h>
#include <SDL2/SDL_video.h>
#include <SDL2/SDL_render.h>
#include <SDL2/SDL_rect.h>
#include <SDL2/SDL_mutex.h>
}
/* no AV sync correction is done if below the minimum AV sync threshold */
#define AV_SYNC_THRESHOLD_MIN 0.04
/* AV sync correction is done if above the maximum AV sync threshold */
#define AV_SYNC_THRESHOLD_MAX 0.1
/* If a frame duration is longer than this, it will not be duplicated to compensate AV sync */
#define AV_SYNC_FRAMEDUP_THRESHOLD 0.1
/* no AV correction is done if too big error */
#define AV_NOSYNC_THRESHOLD 10.0
/* polls for possible required screen refresh at least this often, should be less than 1/fps */
#define REFRESH_RATE 0.01
#define SDL_AUDIO_BUFFER_SIZE 1024
#define MAX_AUDIO_FRAME_SIZE 192000
#define MAX_QUEUE_SIZE (15 * 1024 * 1024)
#define MIN_FRAMES 25
/* Minimum SDL audio buffer size, in samples. */
#define SDL_AUDIO_MIN_BUFFER_SIZE 512
/* Calculate actual buffer size, keeping in mind not to cause too frequent audio callbacks */
#define SDL_AUDIO_MAX_CALLBACKS_PER_SEC 30
#define VIDEO_PICTURE_QUEUE_SIZE 3
#define SUBPICTURE_QUEUE_SIZE 16
#define SAMPLE_QUEUE_SIZE 9
#define FRAME_QUEUE_SIZE FFMAX(SAMPLE_QUEUE_SIZE, FFMAX(VIDEO_PICTURE_QUEUE_SIZE, SUBPICTURE_QUEUE_SIZE))
#define FF_QUIT_EVENT (SDL_USEREVENT + 2)
typedef struct {
double pts; // display timestamp of the current (to-be-played) frame; once played, the current frame becomes the previous frame
double pts_drift; // difference between the current frame's pts and the current system clock time
double last_updated; // time this clock (e.g. the video clock) was last updated; also the current clock time
double speed; // clock speed control, used to control playback speed
int serial; // playback sequence; a sequence is one continuous run of playback, and each seek starts a new sequence
int paused; // pause flag
int *queue_serial; // points to packet_serial
} play_clock_t;
typedef struct {
int freq;
int channels;
int64_t channel_layout;
enum AVSampleFormat fmt;
int frame_size;
int bytes_per_sec;
} audio_param_t;
typedef struct {
SDL_Window *window;
SDL_Renderer *renderer;
SDL_Texture *texture;
SDL_Rect rect;
} sdl_video_t;
typedef struct packet_queue_t {
AVPacketList *first_pkt, *last_pkt;
int nb_packets; // number of packets in the queue
int size; // total memory occupied by the queue
int64_t duration; // total playback duration of all packets in the queue
int abort_request;
int serial; // playback sequence; each seek starts a new sequence
SDL_mutex *mutex;
SDL_cond *cond;
} packet_queue_t;
/* Common struct for handling all types of decoded data and allocated render buffers. */
typedef struct {
AVFrame *frame;
int serial;
double pts; /* presentation timestamp for the frame */
double duration; /* estimated duration of the frame */
int64_t pos; // byte offset in the input file of the packet this frame came from
int width;
int height;
int format;
AVRational sar;
int uploaded;
int flip_v;
} frame_t;
typedef struct {
frame_t queue[FRAME_QUEUE_SIZE];
int rindex; // read index: the frame to play next; once played it becomes the previous frame
int windex; // write index
int size; // number of frames currently in the queue
int max_size; // maximum number of frames the queue can hold
int keep_last;
int rindex_shown; // whether a frame is currently being shown
SDL_mutex *mutex;
SDL_cond *cond;
packet_queue_t *pktq; // the packet_queue this frame_queue corresponds to
} frame_queue_t;
typedef struct {
char *filename;
AVFormatContext *p_fmt_ctx;
AVStream *p_audio_stream;
AVStream *p_video_stream;
AVCodecContext *p_acodec_ctx;
AVCodecContext *p_vcodec_ctx;
int audio_idx;
int video_idx;
sdl_video_t sdl_video;
play_clock_t audio_clk; // audio clock
play_clock_t video_clk; // video clock
double frame_timer;
packet_queue_t audio_pkt_queue;
packet_queue_t video_pkt_queue;
frame_queue_t audio_frm_queue;
frame_queue_t video_frm_queue;
struct SwsContext *img_convert_ctx;
struct SwrContext *audio_swr_ctx;
AVFrame *p_frm_yuv;
audio_param_t audio_param_src;
audio_param_t audio_param_tgt;
int audio_hw_buf_size; // SDL audio buffer size in bytes
uint8_t *p_audio_frm; // one frame of audio data waiting to be played; this data will be copied into the SDL audio buffer. Points to audio_frm_rwr if resampling was done, otherwise to the audio data inside the frame
uint8_t *audio_frm_rwr; // output buffer for audio resampling
unsigned int audio_frm_size; // size of the audio frame to be played (pointed to by p_audio_frm)
unsigned int audio_frm_rwr_size; // actual allocated size of the audio_frm_rwr buffer
int audio_cp_index; // position within the current audio frame already copied into the SDL audio buffer (index of the first byte still to copy)
int audio_write_buf_size; // bytes of the current audio frame not yet copied into the SDL audio buffer; audio_frm_size = audio_cp_index + audio_write_buf_size
double audio_clock;
int audio_clock_serial;
int abort_request;
int paused;
int step;
SDL_cond *continue_read_thread;
SDL_Thread *read_tid; // demux (demultiplexing) thread
} player_stat_t;
int player_running(const char *p_input_file);
double get_clock(play_clock_t *c);
void set_clock_at(play_clock_t *c, double pts, int serial, double time);
void set_clock(play_clock_t *c, double pts, int serial);
#endif
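The play_clock_t above follows ffplay's clock design: set_clock_at() stores pts_drift = pts - system_time, and get_clock() returns pts_drift + now, i.e. the last pts advanced by the wall time elapsed since the update. A standalone sketch with the time value injected for clarity (the demo_ names are illustrative, not part of this project):

```c
#include <math.h>

/* Minimal model of the play_clock_t logic; `time` is passed in instead of
 * calling av_gettime_relative() so the behavior is easy to check. */
typedef struct {
    double pts;          /* pts at the last update */
    double pts_drift;    /* pts minus system time at the last update */
    double last_updated; /* system time of the last update */
    double speed;        /* playback speed, 1.0 = normal */
    int paused;
} demo_clock_t;

static void demo_set_clock_at(demo_clock_t *c, double pts, double time) {
    c->pts = pts;
    c->pts_drift = pts - time;
    c->last_updated = time;
}

static double demo_get_clock(const demo_clock_t *c, double time) {
    if (c->paused)
        return c->pts; /* a paused clock does not advance */
    /* at speed 1.0 this reduces to pts_drift + time */
    return c->pts_drift + time - (time - c->last_updated) * (1.0 - c->speed);
}
```

For example, after demo_set_clock_at(&c, 10.0, 100.0) at speed 1.0, reading the clock 0.5 s later with demo_get_clock(&c, 100.5) yields 10.5: the frame pts plus the elapsed wall time.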
video.cpp
#include "video.h"
#include "packet.h"
#include "frame.h"
#include "player.h"
static int queue_picture(player_stat_t *is, AVFrame *src_frame, double pts, double duration, int64_t pos)
{
frame_t *vp;
if (!(vp = frame_queue_peek_writable(&is->video_frm_queue)))
return -1;
vp->sar = src_frame->sample_aspect_ratio;
vp->uploaded = 0;
vp->width = src_frame->width;
vp->height = src_frame->height;
vp->format = src_frame->format;
vp->pts = pts;
vp->duration = duration;
vp->pos = pos;
//vp->serial = serial;
//set_default_window_size(vp->width, vp->height, vp->sar);
// move the AVFrame into its slot in the queue
av_frame_move_ref(vp->frame, src_frame);
// update the queue count and write index
frame_queue_push(&is->video_frm_queue);
return 0;
}
// take a packet from the packet_queue and decode it into a frame
static int video_decode_frame(AVCodecContext *p_codec_ctx, packet_queue_t *p_pkt_queue, AVFrame *frame)
{
int ret;
while (1)
{
AVPacket pkt;
while (1)
{
// 3. receive a frame from the decoder
// 3.1 one video packet contains one video frame
// the decoder buffers a number of packets before any decoded frame comes out
// frames come out in pts order, e.g. IBBPBBP
// frame->pkt_pos is the byte offset in the file of the packet this frame came from, same as pkt.pos
ret = avcodec_receive_frame(p_codec_ctx, frame);
if (ret < 0)
{
if (ret == AVERROR_EOF)
{
av_log(NULL, AV_LOG_INFO, "video avcodec_receive_frame(): the decoder has been fully flushed\n");
avcodec_flush_buffers(p_codec_ctx);
return 0;
}
else if (ret == AVERROR(EAGAIN))
{
// av_log(NULL, AV_LOG_INFO, "video avcodec_receive_frame(): output is not available in this state - "
// "user must try to send new input\n");
break;
}
else
{
av_log(NULL, AV_LOG_ERROR, "video avcodec_receive_frame(): other errors\n");
continue;
}
}
else
{
frame->pts = frame->best_effort_timestamp;
//frame->pts = frame->pkt_dts;
return 1; // successfully decoded a video (or audio) frame: return
}
}
// 1. take a packet out of the queue (in ffplay the pkt's serial is assigned to d->pkt_serial here)
if (packet_queue_get(p_pkt_queue, &pkt, true) < 0)
{
return -1;
}
if (pkt.data == NULL)
{
// reset the decoder's internal state / flush its internal buffers
avcodec_flush_buffers(p_codec_ctx);
}
else
{
// 2. send the packet to the decoder
// packets are sent in increasing dts order, e.g. IPBBPBB
// pkt.pos identifies the packet's byte offset in the file
if (avcodec_send_packet(p_codec_ctx, &pkt) == AVERROR(EAGAIN))
{
av_log(NULL, AV_LOG_ERROR, "receive_frame and send_packet both returned EAGAIN, which is an API violation.\n");
}
av_packet_unref(&pkt);
}
}
}
// decode video packets into video frames and push them into the picture queue
static int video_decode_thread(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
AVFrame *p_frame = av_frame_alloc();
double pts;
double duration;
int ret;
int got_picture;
AVRational tb = is->p_video_stream->time_base;
AVRational frame_rate = av_guess_frame_rate(is->p_fmt_ctx, is->p_video_stream, NULL);
if (p_frame == NULL)
{
av_log(NULL, AV_LOG_ERROR, "av_frame_alloc() for p_frame failed\n");
return AVERROR(ENOMEM);
}
while (1)
{
got_picture = video_decode_frame(is->p_vcodec_ctx, &is->video_pkt_queue, p_frame);
if (got_picture < 0)
{
goto exit;
}
duration = (frame_rate.num && frame_rate.den ? av_q2d((AVRational){frame_rate.den, frame_rate.num}) : 0); // display duration of the current frame
pts = (p_frame->pts == AV_NOPTS_VALUE) ? NAN : p_frame->pts * av_q2d(tb); // presentation timestamp of the current frame
ret = queue_picture(is, p_frame, pts, duration, p_frame->pkt_pos); // push the current frame into the frame_queue
av_frame_unref(p_frame);
if (ret < 0)
{
goto exit;
}
}
exit:
av_frame_free(&p_frame);
return 0;
}
// correct the delay according to the difference between the video clock and the sync clock (e.g. the audio clock), so the video clock catches up with or waits for the sync clock
// the input delay is the previous frame's display duration, i.e. how long to wait after the previous frame before showing the current one; adjusting it speeds up or slows down the current frame
// the return value is the corrected delay
static double compute_target_delay(double delay, player_stat_t *is)
{
double sync_threshold, diff = 0;
/* update delay to follow master synchronisation source */
/* if video is slave, we try to correct big delays by
duplicating or deleting a frame */
// difference between the video clock and the sync clock (e.g. the audio clock); a clock's value is the previous frame's pts (more precisely: the previous frame's pts plus the time elapsed since)
diff = get_clock(&is->video_clk) - get_clock(&is->audio_clk);
// delay is the previous frame's display duration: the nominal gap between the current (to-be-played) frame's play time and the previous frame's
// diff is the difference between the video clock and the sync clock
/* skip or repeat frame. We take into account the
delay to compute the threshold. I still don't know
if it is the best guess */
// if delay < AV_SYNC_THRESHOLD_MIN, the sync threshold is AV_SYNC_THRESHOLD_MIN
// if delay > AV_SYNC_THRESHOLD_MAX, the sync threshold is AV_SYNC_THRESHOLD_MAX
// if AV_SYNC_THRESHOLD_MIN <= delay <= AV_SYNC_THRESHOLD_MAX, the sync threshold is delay itself
sync_threshold = FFMAX(AV_SYNC_THRESHOLD_MIN, FFMIN(AV_SYNC_THRESHOLD_MAX, delay));
if (!isnan(diff))
{
if (diff <= -sync_threshold) // video clock is behind the sync clock by more than the threshold
delay = FFMAX(0, delay + diff); // if the current frame is already late (delay+diff<0), delay=0 (play immediately to catch up); otherwise delay=delay+diff
else if (diff >= sync_threshold && delay > AV_SYNC_FRAMEDUP_THRESHOLD) // video clock is ahead by more than the threshold, and the previous frame's duration is very long
delay = delay + diff; // only correct to delay=delay+diff; this branch exists because of AV_SYNC_FRAMEDUP_THRESHOLD
else if (diff >= sync_threshold) // video clock is ahead of the sync clock by more than the threshold
delay = 2 * delay; // video must slow down: double the delay
}
av_log(NULL, AV_LOG_TRACE, "video: delay=%0.3f A-V=%f\n", delay, -diff);
return delay;
}
static double vp_duration(player_stat_t *is, frame_t *vp, frame_t *nextvp) {
if (vp->serial == nextvp->serial)
{
double duration = nextvp->pts - vp->pts;
if (isnan(duration) || duration <= 0)
return vp->duration;
else
return duration;
} else {
return 0.0;
}
}
static void update_video_pts(player_stat_t *is, double pts, int64_t pos, int serial) {
/* update current video pts */
set_clock(&is->video_clk, pts, serial); // update the video clock
//-sync_clock_to_slave(&is->extclk, &is->vidclk); // sync the external clock to the video clock
}
static void video_display(player_stat_t *is)
{
frame_t *vp;
vp = frame_queue_peek_last(&is->video_frm_queue);
// image conversion: p_frm_raw->data ==> p_frm_yuv->data
// a contiguous region of the source image is processed and written to the corresponding region of the destination; the region must consist of consecutive rows
// plane: e.g. YUV has Y, U and V planes; RGB has R, G and B planes
// slice: a contiguous run of rows in the image, ordered top-to-bottom or bottom-to-top
// stride/pitch: bytes per image row; stride = BytesPerPixel * Width + Padding, mind the alignment
// AVFrame.data[]: each element points to the corresponding plane
// AVFrame.linesize[]: each element is the bytes per row of the corresponding plane
sws_scale(is->img_convert_ctx, // sws context
(const uint8_t *const *)vp->frame->data,// src slice
vp->frame->linesize, // src stride
0, // src slice y
is->p_vcodec_ctx->height, // src slice height
is->p_frm_yuv->data, // dst planes
is->p_frm_yuv->linesize // dst strides
);
// update the SDL texture with the new YUV pixel data
SDL_UpdateYUVTexture(is->sdl_video.texture, // sdl texture
&is->sdl_video.rect, // sdl rect
is->p_frm_yuv->data[0], // y plane
is->p_frm_yuv->linesize[0], // y pitch
is->p_frm_yuv->data[1], // u plane
is->p_frm_yuv->linesize[1], // u pitch
is->p_frm_yuv->data[2], // v plane
is->p_frm_yuv->linesize[2] // v pitch
);
// clear the current render target with the draw color
SDL_RenderClear(is->sdl_video.renderer);
// copy the texture (or part of it) to the current render target
SDL_RenderCopy(is->sdl_video.renderer, // sdl renderer
is->sdl_video.texture, // sdl texture
NULL, // src rect, if NULL copy texture
&is->sdl_video.rect // dst rect
);
// present the rendering, updating the on-screen display
SDL_RenderPresent(is->sdl_video.renderer);
}
/* called to display each frame */
static void video_refresh(void *opaque, double *remaining_time)
{
player_stat_t *is = (player_stat_t *)opaque;
double time;
static bool first_frame = true;
retry:
if (frame_queue_nb_remaining(&is->video_frm_queue) == 0) // all frames have been displayed
{
// nothing to do, no picture to display in the queue
return;
}
double last_duration, duration, delay;
frame_t *vp, *lastvp;
/* dequeue the picture */
lastvp = frame_queue_peek_last(&is->video_frm_queue); // previous frame: the frame most recently displayed
vp = frame_queue_peek(&is->video_frm_queue); // current frame: the frame to display next
// on the very first frame, reset frame_timer to the current time (ffplay also does this when lastvp and vp belong to different playback sequences, i.e. after a seek)
if (first_frame)
{
is->frame_timer = av_gettime_relative() / 1000000.0;
first_frame = false;
}
// paused: keep displaying the previous frame
if (is->paused)
goto display;
/* compute nominal last_duration */
last_duration = vp_duration(is, lastvp, vp); // previous frame's display duration: vp->pts - lastvp->pts
delay = compute_target_delay(last_duration, is); // compute the delay from the difference between the video clock and the sync clock
time= av_gettime_relative()/1000000.0;
// if the current frame's play time (is->frame_timer + delay) is later than the current time (time), it is not yet time to play it
if (time < is->frame_timer + delay) {
// not yet time to play: set remaining_time to the gap between now and the next play time
*remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
// not yet time to play: return without displaying
return;
}
// advance frame_timer
is->frame_timer += delay;
// correct frame_timer: if it lags the current system time by too much (more than the maximum sync threshold), reset it to the system time
if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
{
is->frame_timer = time;
}
SDL_LockMutex(is->video_frm_queue.mutex);
if (!isnan(vp->pts))
{
update_video_pts(is, vp->pts, vp->pos, vp->serial); // update the video clock: timestamp and clock time
}
SDL_UnlockMutex(is->video_frm_queue.mutex);
// decide whether to drop frames that could not be played in time
if (frame_queue_nb_remaining(&is->video_frm_queue) > 1) // more than one undisplayed frame in the queue (never drop when only one remains)
{
frame_t *nextvp = frame_queue_peek_next(&is->video_frm_queue); // next frame: the frame to display after the current one
duration = vp_duration(is, vp, nextvp); // current frame vp's display duration = nextvp->pts - vp->pts
// the current frame vp is already too late: the next frame's play time (is->frame_timer + duration) is earlier than the current system time (time)
if (time > is->frame_timer + duration)
{
frame_queue_next(&is->video_frm_queue); // drop the already-displayed lastvp: advance the read index from lastvp to vp
goto retry;
}
}
// advance the read index past the current element. Without dropping, it moves from lastvp to vp; with dropping, from vp to nextvp
frame_queue_next(&is->video_frm_queue);
display:
video_display(is); // display the current frame vp (or nextvp if a frame was dropped)
}
static int video_playing_thread(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
double remaining_time = 0.0;
while (1)
{
if (remaining_time > 0.0)
{
av_usleep((unsigned)(remaining_time * 1000000.0));
}
remaining_time = REFRESH_RATE;
// display the current frame immediately, or after waiting remaining_time
video_refresh(is, &remaining_time);
}
return 0;
}
static int open_video_playing(void *arg)
{
player_stat_t *is = (player_stat_t *)arg;
int ret;
int buf_size;
uint8_t* buffer = NULL;
is->p_frm_yuv = av_frame_alloc();
if (is->p_frm_yuv == NULL)
{
printf("av_frame_alloc() for p_frm_yuv failed\n");
return -1;
}
// manually allocate the buffer behind AVFrame.data[], used to hold the destination frame of sws_scale()
buf_size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P,
is->p_vcodec_ctx->width,
is->p_vcodec_ctx->height,
1
);
// buffer will serve as the video data buffer for p_frm_yuv
buffer = (uint8_t *)av_malloc(buf_size);
if (buffer == NULL)
{
printf("av_malloc() for buffer failed\n");
return -1;
}
// set up p_frm_yuv->data and p_frm_yuv->linesize with the given parameters
ret = av_image_fill_arrays(is->p_frm_yuv->data, // dst data[]
is->p_frm_yuv->linesize, // dst linesize[]
buffer, // src buffer
AV_PIX_FMT_YUV420P, // pixel format
is->p_vcodec_ctx->width, // width
is->p_vcodec_ctx->height,// height
1 // align
);
if (ret < 0)
{
printf("av_image_fill_arrays() failed %d\n", ret);
return -1;
}
// A2. initialize the SWS context for the image conversion later
// the 6th parameter here is an FFmpeg pixel format (cf. note B3)
// FFmpeg's AV_PIX_FMT_YUV420P corresponds to SDL's SDL_PIXELFORMAT_IYUV
// if the decoded image format is not supported by SDL, SDL cannot display it without conversion
// if the decoded image format is supported by SDL, no conversion is needed
// for simplicity we always convert to a format SDL supports: AV_PIX_FMT_YUV420P ==> SDL_PIXELFORMAT_IYUV
is->img_convert_ctx = sws_getContext(is->p_vcodec_ctx->width, // src width
is->p_vcodec_ctx->height, // src height
is->p_vcodec_ctx->pix_fmt, // src format
is->p_vcodec_ctx->width, // dst width
is->p_vcodec_ctx->height, // dst height
AV_PIX_FMT_YUV420P, // dst format
SWS_BICUBIC, // flags
NULL, // src filter
NULL, // dst filter
NULL // param
);
if (is->img_convert_ctx == NULL)
{
printf("sws_getContext() failed\n");
return -1;
}
// fill in the SDL_Rect
is->sdl_video.rect.x = 0;
is->sdl_video.rect.y = 0;
is->sdl_video.rect.w = is->p_vcodec_ctx->width;
is->sdl_video.rect.h = is->p_vcodec_ctx->height;
// 1. create the SDL window; SDL 2.0 supports multiple windows
// SDL_Window is the video window that pops up when the program runs, like SDL_Surface in SDL 1.x
is->sdl_video.window = SDL_CreateWindow("simple ffplayer",
SDL_WINDOWPOS_UNDEFINED,// window X position: don't care
SDL_WINDOWPOS_UNDEFINED,// window Y position: don't care
is->sdl_video.rect.w,
is->sdl_video.rect.h,
SDL_WINDOW_OPENGL
);
if (is->sdl_video.window == NULL)
{
printf("SDL_CreateWindow() failed: %s\n", SDL_GetError());
return -1;
}
// 2. create the SDL_Renderer
// SDL_Renderer: the renderer
is->sdl_video.renderer = SDL_CreateRenderer(is->sdl_video.window, -1, 0);
if (is->sdl_video.renderer == NULL)
{
printf("SDL_CreateRenderer() failed: %s\n", SDL_GetError());
return -1;
}
// 3. create the SDL_Texture
// one SDL_Texture holds one frame of YUV data, like SDL_Overlay in SDL 1.x
is->sdl_video.texture = SDL_CreateTexture(is->sdl_video.renderer,
SDL_PIXELFORMAT_IYUV,
SDL_TEXTUREACCESS_STREAMING,
is->sdl_video.rect.w,
is->sdl_video.rect.h
);
if (is->sdl_video.texture == NULL)
{
printf("SDL_CreateTexture() failed: %s\n", SDL_GetError());
return -1;
}
SDL_CreateThread(video_playing_thread, "video playing thread", is);
return 0;
}
static int open_video_stream(player_stat_t *is)
{
AVCodecParameters* p_codec_par = NULL;
AVCodec* p_codec = NULL;
AVCodecContext* p_codec_ctx = NULL;
AVStream *p_stream = is->p_video_stream;
int ret;
// 1. build the decoder AVCodecContext for the video stream
// 1.1 get the decoder parameters AVCodecParameters
p_codec_par = p_stream->codecpar;
// 1.2 find the decoder
p_codec = avcodec_find_decoder(p_codec_par->codec_id);
if (p_codec == NULL)
{
printf("Can't find codec!\n");
return -1;
}
// 1.3 build the decoder AVCodecContext
// 1.3.1 allocate p_codec_ctx and initialize its members to defaults based on p_codec
p_codec_ctx = avcodec_alloc_context3(p_codec);
if (p_codec_ctx == NULL)
{
printf("avcodec_alloc_context3() failed\n");
return -1;
}
// 1.3.2 initialize p_codec_ctx from the stream parameters: p_codec_par ==> p_codec_ctx
ret = avcodec_parameters_to_context(p_codec_ctx, p_codec_par);
if (ret < 0)
{
printf("avcodec_parameters_to_context() failed\n");
return -1;
}
// 1.3.3 open p_codec_ctx with p_codec; initialization complete
ret = avcodec_open2(p_codec_ctx, p_codec, NULL);
if (ret < 0)
{
printf("avcodec_open2() failed %d\n", ret);
return -1;
}
is->p_vcodec_ctx = p_codec_ctx;
// 2. create the video decoding thread
SDL_CreateThread(video_decode_thread, "video decode thread", is);
return 0;
}
int open_video(player_stat_t *is)
{
open_video_stream(is);
open_video_playing(is);
return 0;
}
video.h
#ifndef __VIDEO_H__
#define __VIDEO_H__
#include "player.h"
int open_video(player_stat_t *is);
#endif
Project download
Troubleshooting
Audio plays back too fast or too slow
Solution:
Most sample code opens the audio device with SDL_OpenAudio(), which always acts on device ID 1. On some systems, however, the default audio device is not device 1, so open the device with SDL_OpenAudioDevice() instead. Note the difference between the two functions (from the SDL2 headers):
/**
* This function opens the audio device with the desired parameters, and
* returns 0 if successful, placing the actual hardware parameters in the
* structure pointed to by \c obtained. If \c obtained is NULL, the audio
* data passed to the callback function will be guaranteed to be in the
* requested format, and will be automatically converted to the hardware
* audio format if necessary. This function returns -1 if it failed
* to open the audio device, or couldn't set up the audio thread.
*
* When filling in the desired audio spec structure,
* - \c desired->freq should be the desired audio frequency in samples-per-
* second.
* - \c desired->format should be the desired audio format.
* - \c desired->samples is the desired size of the audio buffer, in
* samples. This number should be a power of two, and may be adjusted by
* the audio driver to a value more suitable for the hardware. Good values
* seem to range between 512 and 8096 inclusive, depending on the
* application and CPU speed. Smaller values yield faster response time,
* but can lead to underflow if the application is doing heavy processing
* and cannot fill the audio buffer in time. A stereo sample consists of
* both right and left channels in LR ordering.
* Note that the number of samples is directly related to time by the
* following formula: \code ms = (samples*1000)/freq \endcode
* - \c desired->size is the size in bytes of the audio buffer, and is
* calculated by SDL_OpenAudio().
* - \c desired->silence is the value used to set the buffer to silence,
* and is calculated by SDL_OpenAudio().
* - \c desired->callback should be set to a function that will be called
* when the audio device is ready for more data. It is passed a pointer
* to the audio buffer, and the length in bytes of the audio buffer.
* This function usually runs in a separate thread, and so you should
* protect data structures that it accesses by calling SDL_LockAudio()
* and SDL_UnlockAudio() in your code. Alternately, you may pass a NULL
* pointer here, and call SDL_QueueAudio() with some frequency, to queue
* more audio samples to be played (or for capture devices, call
* SDL_DequeueAudio() with some frequency, to obtain audio samples).
* - \c desired->userdata is passed as the first parameter to your callback
* function. If you passed a NULL callback, this value is ignored.
*
* The audio device starts out playing silence when it's opened, and should
* be enabled for playing by calling \c SDL_PauseAudio(0) when you are ready
* for your audio callback function to be called. Since the audio driver
* may modify the requested size of the audio buffer, you should allocate
* any local mixing buffers after you open the audio device.
*/
extern DECLSPEC int SDLCALL SDL_OpenAudio(SDL_AudioSpec * desired,
SDL_AudioSpec * obtained);
/**
* Open a specific audio device. Passing in a device name of NULL requests
* the most reasonable default (and is equivalent to calling SDL_OpenAudio()).
*
* The device name is a UTF-8 string reported by SDL_GetAudioDeviceName(), but
* some drivers allow arbitrary and driver-specific strings, such as a
* hostname/IP address for a remote audio server, or a filename in the
* diskaudio driver.
*
* \return 0 on error, a valid device ID that is >= 2 on success.
*
* SDL_OpenAudio(), unlike this function, always acts on device ID 1.
*/
extern DECLSPEC SDL_AudioDeviceID SDLCALL SDL_OpenAudioDevice(const char *device,
int iscapture,
const SDL_AudioSpec *desired,
SDL_AudioSpec *obtained,
int allowed_changes);