Working out why an MP3 frame lasts about 26 ms involves two key facts: each frame contains 1152 samples, and the sample rate is 44100 Hz.
The sample rate is the number of samples taken per second. At 44.1 kHz, whatever the waveform looks like, regular like a sine wave or completely irregular, 44100 samples are taken every second. The higher the sample rate, the closer the result is to the original waveform and the lower the distortion. But more samples also mean more data, and storage size and network bandwidth have to be considered, so a sample rate that covers the range of human hearing is enough; the right size, not the biggest, is the goal.
In English the sample count is described as the "number of audio samples (per channel) described by this frame" — that is, how many samples one frame of data contains. At a sample rate of 44.1 kHz there are 44100 samples per second. For MP3, 1152 samples make up one frame, so playing one frame (1152 samples) takes 1152 / 44100 s ≈ 26 ms.
Audio terminology
sample: a single sampled value
SampleRate: sampling frequency, the number of samples per second. For example, 44.1 kHz AAC is sampled 44100 times per second (44.1k samples per second).
BitsPerSample: bit depth, which can be thought of as the resolution at which the capture hardware represents sound — the larger the value, the higher the resolution and the more faithful the recording and playback. Sound in a computer is stored as 0s and 1s: the continuous analog signal is sampled at the sample rate, and each discrete sample is quantized into a binary code of fixed precision. The number of bits in that code is the bit depth, also called the quantization precision, i.e. the number of bits used to store a single sample of a given audio format. Audio is commonly 16-bit, i.e. 2 bytes.
Channels: the number of channels the audio format provides, typically 2 for AAC.
SamplesPerSecond: the number of samples per second the audio format provides, typically 44.1k for AAC.
Bits Per Second (bit rate):
The bit rate is the number of bits transferred per second, in bps (bits per second); the higher the bit rate, the more data is transferred. In audio and video it is often called the code rate: it states how many bits per second the encoded (compressed) audio or video data needs, a bit being the smallest binary unit, either 0 or 1. Put simply, a higher bit rate means better audio/video quality but a larger encoded file, and a lower bit rate means the opposite.
bit rate = sample rate × bit depth × number of channels
Storage for one second of audio = storage per channel × number of channels = (samples per second × storage per sample) × (number of channels)
For example, 44.1 kHz 2-channel AAC at 16 bits per sample occupies 44.1k × 2 × 16 bits per second.
For the AAC format:
1024 samples make up one frame, so for 44.1 kHz AAC (44100 samples per second) one frame plays for 1024 / 44100 s ≈ 23.2 ms.
For the MP3 format:
For stereo MP3, 1152 samples make up one frame.
Related data structures in FFmpeg
AVPacket: holds data before decoding (encoded data: H.264/AAC, etc.)
AVFrame: holds data after decoding, i.e. raw uncompressed data (video: YUV, RGB; audio: PCM)
/**
* This structure describes decoded (raw) audio or video data.
*
* AVFrame must be allocated using av_frame_alloc(). Note that this only
* allocates the AVFrame itself, the buffers for the data must be managed
* through other means (see below).
* AVFrame must be freed with av_frame_free().
*
* AVFrame is typically allocated once and then reused multiple times to hold
* different data (e.g. a single AVFrame to hold frames received from a
* decoder). In such a case, av_frame_unref() will free any references held by
* the frame and reset it to its original clean state before it
* is reused again.
*
* The data described by an AVFrame is usually reference counted through the
* AVBuffer API. The underlying buffer references are stored in AVFrame.buf /
* AVFrame.extended_buf. An AVFrame is considered to be reference counted if at
* least one reference is set, i.e. if AVFrame.buf[0] != NULL. In such a case,
* every single data plane must be contained in one of the buffers in
* AVFrame.buf or AVFrame.extended_buf.
* There may be a single buffer for all the data, or one separate buffer for
* each plane, or anything in between.
*
* sizeof(AVFrame) is not a part of the public ABI, so new fields may be added
* to the end with a minor bump.
*
* Fields can be accessed through AVOptions, the name string used, matches the
* C structure field name for fields accessible through AVOptions. The AVClass
* for AVFrame can be obtained from avcodec_get_frame_class()
*/
typedef struct AVFrame {
#define AV_NUM_DATA_POINTERS 8
/**
* pointer to the picture/channel planes.
* This might be different from the first allocated byte
*
* Some decoders access areas outside 0,0 - width,height, please
* see avcodec_align_dimensions2(). Some filters and swscale can read
* up to 16 bytes beyond the planes, if these filters are to be used,
* then 16 extra bytes must be allocated.
*
* NOTE: Except for hwaccel formats, pointers not needed by the format
* MUST be set to NULL.
*/
uint8_t *data[AV_NUM_DATA_POINTERS];
/**
* For video, size in bytes of each picture line.
* For audio, size in bytes of each plane.
*
* For audio, only linesize[0] may be set. For planar audio, each channel
* plane must be the same size.
*
* For video the linesizes should be multiples of the CPUs alignment
* preference, this is 16 or 32 for modern desktop CPUs.
* Some code requires such alignment other code can be slower without
* correct alignment, for yet other it makes no difference.
*
* @note The linesize may be larger than the size of usable data -- there
* may be extra padding present for performance reasons.
*/
int linesize[AV_NUM_DATA_POINTERS];
/**
* pointers to the data planes/channels.
*
* For video, this should simply point to data[].
*
* For planar audio, each channel has a separate data pointer, and
* linesize[0] contains the size of each channel buffer.
* For packed audio, there is just one data pointer, and linesize[0]
* contains the total size of the buffer for all channels.
*
* Note: Both data and extended_data should always be set in a valid frame,
* but for planar audio with more channels than can fit in data,
* extended_data must be used in order to access all channels.
*/
uint8_t **extended_data;
/**
* @name Video dimensions
* Video frames only. The coded dimensions (in pixels) of the video frame,
* i.e. the size of the rectangle that contains some well-defined values.
*
* @note The part of the frame intended for display/presentation is further
* restricted by the @ref cropping "Cropping rectangle".
* @{
*/
int width, height;
/**
* @}
*/
/**
* number of audio samples (per channel) described by this frame
*/
int nb_samples;
/**
* format of the frame, -1 if unknown or unset
* Values correspond to enum AVPixelFormat for video frames,
* enum AVSampleFormat for audio)
*/
int format;
/**
* 1 -> keyframe, 0-> not
*/
int key_frame;
/**
* Picture type of the frame.
*/
enum AVPictureType pict_type;
/**
* Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.
*/
AVRational sample_aspect_ratio;
/**
* Presentation timestamp in time_base units (time when frame should be shown to user).
*/
int64_t pts;
#if FF_API_PKT_PTS
/**
* PTS copied from the AVPacket that was decoded to produce this frame.
* @deprecated use the pts field instead
*/
attribute_deprecated
int64_t pkt_pts;
#endif
/**
* DTS copied from the AVPacket that triggered returning this frame. (if frame threading isn't used)
* This is also the Presentation time of this AVFrame calculated from
* only AVPacket.dts values without pts values.
*/
int64_t pkt_dts;
/**
* picture number in bitstream order
*/
int coded_picture_number;
/**
* picture number in display order
*/
int display_picture_number;
/**
* quality (between 1 (good) and FF_LAMBDA_MAX (bad))
*/
int quality;
/**
* for some private data of the user
*/
void *opaque;
#if FF_API_ERROR_FRAME
/**
* @deprecated unused
*/
attribute_deprecated
uint64_t error[AV_NUM_DATA_POINTERS];
#endif
/**
* When decoding, this signals how much the picture must be delayed.
* extra_delay = repeat_pict / (2*fps)
*/
int repeat_pict;
/**
* The content of the picture is interlaced.
*/
int interlaced_frame;
/**
* If the content is interlaced, is top field displayed first.
*/
int top_field_first;
/**
* Tell user application that palette has changed from previous frame.
*/
int palette_has_changed;
/**
* reordered opaque 64 bits (generally an integer or a double precision float
* PTS but can be anything).
* The user sets AVCodecContext.reordered_opaque to represent the input at
* that time,
* the decoder reorders values as needed and sets AVFrame.reordered_opaque
* to exactly one of the values provided by the user through AVCodecContext.reordered_opaque
*/
int64_t reordered_opaque;
/**
* Sample rate of the audio data.
*/
int sample_rate;
/**
* Channel layout of the audio data.
*/
uint64_t channel_layout;
/**
* AVBuffer references backing the data for this frame. If all elements of
* this array are NULL, then this frame is not reference counted. This array
* must be filled contiguously -- if buf[i] is non-NULL then buf[j] must
* also be non-NULL for all j < i.
*
* There may be at most one AVBuffer per data plane, so for video this array
* always contains all the references. For planar audio with more than
* AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in
* this array. Then the extra AVBufferRef pointers are stored in the
* extended_buf array.
*/
AVBufferRef *buf[AV_NUM_DATA_POINTERS];
/**
* For planar audio which requires more than AV_NUM_DATA_POINTERS
* AVBufferRef pointers, this array will hold all the references which
* cannot fit into AVFrame.buf.
*
* Note that this is different from AVFrame.extended_data, which always
* contains all the pointers. This array only contains the extra pointers,
* which cannot fit into AVFrame.buf.
*
* This array is always allocated using av_malloc() by whoever constructs
* the frame. It is freed in av_frame_unref().
*/
AVBufferRef **extended_buf;
/**
* Number of elements in extended_buf.
*/
int nb_extended_buf;
AVFrameSideData **side_data;
int nb_side_data;
/**
* @defgroup lavu_frame_flags AV_FRAME_FLAGS
* @ingroup lavu_frame
* Flags describing additional frame properties.
*
* @{
*/
/**
* The frame data may be corrupted, e.g. due to decoding errors.
*/
#define AV_FRAME_FLAG_CORRUPT (1 << 0)
/**
* A flag to mark the frames which need to be decoded, but shouldn't be output.
*/
#define AV_FRAME_FLAG_DISCARD (1 << 2)
/**
* @}
*/
/**
* Frame flags, a combination of @ref lavu_frame_flags
*/
int flags;
/**
* MPEG vs JPEG YUV range.
* - encoding: Set by user
* - decoding: Set by libavcodec
*/
enum AVColorRange color_range;
enum AVColorPrimaries color_primaries;
enum AVColorTransferCharacteristic color_trc;
/**
* YUV colorspace type.
* - encoding: Set by user
* - decoding: Set by libavcodec
*/
enum AVColorSpace colorspace;
enum AVChromaLocation chroma_location;
/**
* frame timestamp estimated using various heuristics, in stream time base
* - encoding: unused
* - decoding: set by libavcodec, read by user.
*/
int64_t best_effort_timestamp;
/**
* reordered pos from the last AVPacket that has been input into the decoder
* - encoding: unused
* - decoding: Read by user.
*/
int64_t pkt_pos;
/**
* duration of the corresponding packet, expressed in
* AVStream->time_base units, 0 if unknown.
* - encoding: unused
* - decoding: Read by user.
*/
int64_t pkt_duration;
/**
* metadata.
* - encoding: Set by user.
* - decoding: Set by libavcodec.
*/
AVDictionary *metadata;
/**
* decode error flags of the frame, set to a combination of
* FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there
* were errors during the decoding.
* - encoding: unused
* - decoding: set by libavcodec, read by user.
*/
int decode_error_flags;
#define FF_DECODE_ERROR_INVALID_BITSTREAM 1
#define FF_DECODE_ERROR_MISSING_REFERENCE 2
#define FF_DECODE_ERROR_CONCEALMENT_ACTIVE 4
#define FF_DECODE_ERROR_DECODE_SLICES 8
/**
* number of audio channels, only used for audio.
* - encoding: unused
* - decoding: Read by user.
*/
int channels;
/**
* size of the corresponding packet containing the compressed
* frame.
* It is set to a negative value if unknown.
* - encoding: unused
* - decoding: set by libavcodec, read by user.
*/
int pkt_size;
#if FF_API_FRAME_QP
/**
* QP table
*/
attribute_deprecated
int8_t *qscale_table;
/**
* QP store stride
*/
attribute_deprecated
int qstride;
attribute_deprecated
int qscale_type;
attribute_deprecated
AVBufferRef *qp_table_buf;
#endif
/**
* For hwaccel-format frames, this should be a reference to the
* AVHWFramesContext describing the frame.
*/
AVBufferRef *hw_frames_ctx;
/**
* AVBufferRef for free use by the API user. FFmpeg will never check the
* contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when
* the frame is unreferenced. av_frame_copy_props() calls create a new
* reference with av_buffer_ref() for the target frame's opaque_ref field.
*
* This is unrelated to the opaque field, although it serves a similar
* purpose.
*/
AVBufferRef *opaque_ref;
/**
* @anchor cropping
* @name Cropping
* Video frames only. The number of pixels to discard from the
* top/bottom/left/right border of the frame to obtain the sub-rectangle of
* the frame intended for presentation.
* @{
*/
size_t crop_top;
size_t crop_bottom;
size_t crop_left;
size_t crop_right;
/**
* @}
*/
/**
* AVBufferRef for internal use by a single libav* library.
* Must not be used to transfer data between libraries.
* Has to be NULL when ownership of the frame leaves the respective library.
*
* Code outside the FFmpeg libs should never check or change the contents of the buffer ref.
*
* FFmpeg calls av_buffer_unref() on it when the frame is unreferenced.
* av_frame_copy_props() calls create a new reference with av_buffer_ref()
* for the target frame's private_ref field.
*/
AVBufferRef *private_ref;
} AVFrame;
#include <iostream>
using namespace std;
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
#include <libswresample/swresample.h>
}
//link against the FFmpeg import libraries (MSVC-specific pragma)
#pragma comment(lib,"avcodec.lib")
#pragma comment(lib,"avformat.lib")
#pragma comment(lib,"avutil.lib")
#pragma comment(lib,"swresample.lib")
int main()
{
//av_register_all(); // deprecated since FFmpeg 4.0, no longer needed
char inputfile[] = "audio.pcm";
char outputfile[] = "audio.mp3";
int ret = 0;
FILE* finput = NULL;
FILE* foutput = NULL;
//1. find the MP3 encoder
AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_MP3);
if (!codec)
{
cout << "avcodec_find_encoder failed" << endl;
return -1;
}
//2. allocate the encoder context
AVCodecContext* ctx = NULL;
ctx = avcodec_alloc_context3(codec);//allocate the context
if (!ctx)
{
cout << "avcodec_alloc_context3 failed" << endl;
return -1;
}
ctx->bit_rate = 64000;
ctx->channels = 2;
ctx->channel_layout = AV_CH_LAYOUT_STEREO;
ctx->sample_rate = 44100;
ctx->sample_fmt = AV_SAMPLE_FMT_S16P;//libmp3lame requires a planar sample format; s16p is used here
//3. open the encoder with the configured context
//returns 0 on success
ret = avcodec_open2(ctx, codec, NULL);
if (ret < 0) {
cout << "avcodec_open2 failed" << endl;
return -1;
}
//4. open the output file
foutput = fopen(outputfile, "wb");
if (!foutput)
{
cout << "fopen outputfile failed" << endl;
return -1;
}
//5. AVFrame receives each resampled audio frame; an MP3 frame holds 1152 samples
AVFrame* frame;
frame = av_frame_alloc();
if (!frame)
{
cout << "av_frame_alloc failed" << endl;
return -1;
}
frame->nb_samples = 1152;//one MP3 frame holds 1152 samples
frame->channels = 2;
frame->channel_layout = AV_CH_LAYOUT_STEREO;
frame->format = AV_SAMPLE_FMT_S16P;
ret = av_frame_get_buffer(frame, 0);//allocate the sample buffers
if (ret < 0)
{
cout << "av_frame_get_buffer failed" << endl;
return -1;
}
//6. create the audio resampler context
SwrContext* swr = swr_alloc();
if (!swr)
{
cout << "swr_alloc failed" << endl;
return -1;
}
//input PCM parameters: stereo layout, 44100 Hz, s16 interleaved (packed)
av_opt_set_int(swr, "in_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(swr, "in_sample_rate", 44100, 0);
av_opt_set_sample_fmt(swr, "in_sample_fmt", AV_SAMPLE_FMT_S16, 0);//packed PCM sample format
//output parameters for the MP3 encoder: stereo layout, 44100 Hz, s16 planar
av_opt_set_int(swr, "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(swr, "out_sample_rate", 44100, 0);
av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16P, 0);
//initialize the resampler
ret = swr_init(swr);
if (ret < 0) {
cout << "swr_init failed" << endl;
return -1;
}
//open the input file
finput = fopen(inputfile, "rb");
if (!finput)
{
cout << "fopen inputfile failed" << endl;
return -1;
}
//interleaved input buffer (one plane)
uint8_t** input_data = NULL;
//planar output buffer (one plane per channel)
uint8_t** output_data = NULL;
int input_linesize, output_linesize;
//allocate a buffer for the PCM input data
ret = av_samples_alloc_array_and_samples(&input_data, &input_linesize, 2, 1152, AV_SAMPLE_FMT_S16, 0);
if (ret < 0) {
cout << "av_samples_alloc_array_and_samples input failed" << endl;
return -1;
}
//allocate a buffer for the resampled (planar) data
ret = av_samples_alloc_array_and_samples(&output_data, &output_linesize, 2, 1152, AV_SAMPLE_FMT_S16P, 0);
if (ret < 0) {
cout << "av_samples_alloc_array_and_samples out failed" << endl;
return -1;
}
//holds the encoded output
AVPacket* pkt = av_packet_alloc();
if (!pkt)
{
cout << "av_packet_alloc failed" << endl;
return -1;
}
while (!feof(finput))
{
int readsize = fread(input_data[0], 1, 1152 * 2 * 2, finput);//1152 samples * 2 channels * 2 bytes
if (!readsize) {
break;
}
cout << readsize << endl;
//resample; the last read may be short, so only convert the samples actually read
int in_samples = readsize / (2 * 2);
ret = swr_convert(swr, output_data, 1152, (const uint8_t**)input_data, in_samples);
if (ret < 0)
{
cout << "swr_convert failed" << endl;
return -1;
}
//copy the resampled data into the frame
//s16p stores the left channel first, then the right: data[0] is left, data[1] is right
frame->data[0] = output_data[0];
frame->data[1] = output_data[1];
frame->nb_samples = ret;//swr_convert returned the number of samples actually converted
//encode and write to the mp3 file: the encoder works on the data held by the frame
//send the frame to the encoder
ret = avcodec_send_frame(ctx, frame);
if (ret < 0)
{
cout << "avcodec_send_frame failed" << endl;
return -1;
}
while (ret >= 0) {
//receive encoded data from the encoder; one frame may yield several packets
ret = avcodec_receive_packet(ctx, pkt);
//AVERROR(EAGAIN) means the encoder needs more input; AVERROR_EOF means it is drained — neither is fatal
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
{
break;
}
else if (ret < 0) {
break;
}
fwrite(pkt->data, 1, pkt->size, foutput);
av_packet_unref(pkt);//unref the packet, otherwise it leaks
}
}
//flush the encoder: a NULL frame signals end of stream, then drain the remaining packets
avcodec_send_frame(ctx, NULL);
while (avcodec_receive_packet(ctx, pkt) >= 0) {
fwrite(pkt->data, 1, pkt->size, foutput);
av_packet_unref(pkt);
}
//free the sample buffers
if (input_data)
{
av_freep(&input_data[0]);
av_freep(&input_data);
}
if (output_data)
{
av_freep(&output_data[0]);
av_freep(&output_data);
}
//close the files
fclose(finput);
fclose(foutput);
//free the frame and the packet
av_frame_free(&frame);
av_packet_free(&pkt);
//free the resampler context
swr_free(&swr);
//free the encoder context
avcodec_free_context(&ctx);
return 0;
}