0. Preface
Qt's multimedia module provides some basic audio/video interfaces. Although it ships no codecs, being able to handle PCM data is enough for recording. In this implementation, QAudioDeviceInfo is used to query input/output device information, QAudioInput to capture audio input, and QAudioOutput to play audio data.
The code for this article, along with a demo of the result, can be found at:
GitHub link (the SimpleAudioRecorder class): https://github.com/gongjianbo/QmlAudioView
1. Implementation Details
First, let's clarify the main requirements:
- Record audio
- Play back the recorded audio
- Save and load recorded audio files
- Waveform drawing
1.1 Recording
For recording, the QAudioInput class can be used; the Qt documentation gives a basic usage example:
QFile destinationFile;   // Class member
QAudioInput *audio;      // Class member
{
    destinationFile.setFileName("/tmp/test.raw");
    destinationFile.open(QIODevice::WriteOnly | QIODevice::Truncate);

    QAudioFormat format;
    // Set up the desired format, for example:
    format.setSampleRate(8000);
    format.setChannelCount(1);
    format.setSampleSize(8);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::UnSignedInt);

    QAudioDeviceInfo info = QAudioDeviceInfo::defaultInputDevice();
    if (!info.isFormatSupported(format)) {
        qWarning() << "Default format not supported, trying to use the nearest.";
        format = info.nearestFormat(format);
    }

    audio = new QAudioInput(format, this);
    connect(audio, SIGNAL(stateChanged(QAudio::State)), this, SLOT(handleStateChanged(QAudio::State)));

    QTimer::singleShot(3000, this, SLOT(stopRecording()));
    audio->start(&destinationFile);
    // Records audio for 3000 ms
}

void AudioInputExample::stopRecording()
{
    audio->stop();
    destinationFile.close();
    delete audio;
}
This example records audio and saves it to a file. Note that there may be several input devices, and QAudioDeviceInfo::defaultInputDevice() is not necessarily the one we want. In that case you can offer a combo box and let the user pick from the list returned by QAudioDeviceInfo::availableDevices(QAudio::AudioInput). That interface has its own problem, though: the returned list may contain duplicates, because Qt ships multiple plugins, and entries with the same device name can support different parameters; see (Qt4) QTBUG-16841 and (Qt5) QTBUG-75781. Another issue: after devices are hot-plugged in or out, a QAudioInput constructed from a previously obtained QAudioDeviceInfo may no longer work.
Since the code in this article is only meant as a simple implementation, it just uses defaultInputDevice().
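As a sketch of that de-duplication workaround (plain C++ with no Qt types; `dedupeByName` is a hypothetical helper operating on plain strings instead of QAudioDeviceInfo entries), keeping only the first occurrence of each device name could look like this:

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical helper: collapse a device-name list that may contain
// duplicates (one entry per plugin) down to the first occurrence of each name.
std::vector<std::string> dedupeByName(const std::vector<std::string> &names)
{
    std::vector<std::string> result;
    std::unordered_set<std::string> seen;
    for (const auto &name : names) {
        // insert() reports whether the name was newly added
        if (seen.insert(name).second)
            result.push_back(name);
    }
    return result;
}
```

With QAudioDeviceInfo the same idea applies to deviceName(); deciding which duplicate (and which of its supported formats) to keep is still up to the application.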
A QIODevice subclass is also needed to receive the data (like the QFile in the example); typically you subclass it and process the incoming data in the writeData interface.
Qt audio input has two modes, push and pull, distinguished by the start overload used:
void QAudioInput::start(QIODevice *device); // pull
QIODevice* QAudioInput::start(); // push
In pull mode, the framework automatically calls the QIODevice's writeData interface to hand over data; in push mode, you must actively read data from the QIODevice returned by start (see Qt's "audio input" example).
1.2 Playback
Playback is implemented with the QAudioOutput class, whose interface is similar to QAudioInput's; the documentation also has an example:
QFile sourceFile;    // class member.
QAudioOutput *audio; // class member.
{
    sourceFile.setFileName("/tmp/test.raw");
    sourceFile.open(QIODevice::ReadOnly);

    QAudioFormat format;
    // Set up the format, eg.
    format.setSampleRate(8000);
    format.setChannelCount(1);
    format.setSampleSize(8);
    format.setCodec("audio/pcm");
    format.setByteOrder(QAudioFormat::LittleEndian);
    format.setSampleType(QAudioFormat::UnSignedInt);

    QAudioDeviceInfo info(QAudioDeviceInfo::defaultOutputDevice());
    if (!info.isFormatSupported(format)) {
        qWarning() << "Raw audio format not supported by backend, cannot play audio.";
        return;
    }

    audio = new QAudioOutput(format, this);
    connect(audio, SIGNAL(stateChanged(QAudio::State)), this, SLOT(handleStateChanged(QAudio::State)));
    audio->start(&sourceFile);
}
This example simply reads a file and plays it; the caveats for output devices are much the same as for input devices.
Qt audio output likewise has push and pull modes, distinguished by the start overload:
void QAudioOutput::start(QIODevice *device); // pull
QIODevice* QAudioOutput::start(); // push
In pull mode, the framework automatically calls the QIODevice's readData interface to fetch data; in push mode, you must actively write data to the QIODevice returned by start (see Qt's "audio output" example).
One problem with pull-mode playback is that you don't know the playback progress. A workaround is to set QAudioOutput's notifyInterval to a small value, connect the notify signal, and call processedUSecs() in the slot to get the elapsed playback time; note that this interface returns microseconds.
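The unit conversions involved can be sketched in plain C++ (no Qt; `positionFromUSecs` is a hypothetical helper, and the 16 kHz / 16-bit / mono parameters match the fixed format used later in this post): given the microseconds reported by processedUSecs(), the millisecond position and the corresponding byte offset into the PCM buffer follow directly from the format.

```cpp
#include <cstdint>

struct PlaybackPos {
    std::int64_t ms;    // elapsed playback time in milliseconds
    std::int64_t bytes; // corresponding offset into the PCM buffer
};

// Hypothetical helper: map the microsecond count from processedUSecs()
// to a millisecond position and a byte offset for the given PCM format.
PlaybackPos positionFromUSecs(std::int64_t usecs, int sampleRate,
                              int bytesPerSample, int channels)
{
    PlaybackPos pos;
    pos.ms = usecs / 1000;
    // samples played = usecs * rate / 1e6;
    // each sample frame is bytesPerSample * channels bytes
    pos.bytes = usecs * sampleRate / 1000000 * bytesPerSample * channels;
    return pos;
}
```

One second of 16 kHz / 16-bit mono audio thus corresponds to 32000 bytes; the notify slot in section 2 performs the same arithmetic via the total duration instead.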
1.3 Other Notes
The data is stored in a QByteArray, which in Qt5 is limited to 2 GB.
For drawing, it is best to downsample before painting: one pixel of horizontal space can only show one pixel-wide point anyway, and drawing more is just wasted CPU. Also remember that Qt's screen coordinate system has its origin at the top-left with the positive direction toward the bottom-right; double-check the rendered result.
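The downsampling can be sketched as min/max decimation in plain C++ (no Qt; `decimate` is a hypothetical helper): each pixel-wide bucket keeps only its extremes, so the drawn path preserves the waveform envelope with just a couple of points per pixel.

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical helper: reduce a PCM sample buffer to roughly `buckets`
// (min, max) pairs, one pair per pixel-wide segment.
std::vector<std::pair<short, short>>
decimate(const std::vector<short> &samples, std::size_t buckets)
{
    std::vector<std::pair<short, short>> out;
    if (samples.empty() || buckets == 0)
        return out;
    const std::size_t step = std::max<std::size_t>(1, samples.size() / buckets);
    for (std::size_t i = 0; i < samples.size(); i += step) {
        const std::size_t end = std::min(samples.size(), i + step);
        // keep only the extremes of this segment
        const auto mm = std::minmax_element(samples.begin() + i, samples.begin() + end);
        out.emplace_back(*mm.first, *mm.second);
    }
    return out;
}
```

The updateSamplePath() implementation in section 2 does the same thing, but additionally keeps the min and max in their order of appearance so the path never runs backwards along the x axis.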
For hot-plug detection of input/output devices on Windows, you can filter WM_DEVICECHANGE in nativeEvent; see the Windows documentation for details.
If you only want to record to a file in the simplest way possible, you can use the QAudioRecorder class; search the Qt examples for "recorder". The recording implementation can also be studied from that class's source code.
Main remaining problem: the current implementation paints with QPainter in paintEvent, which feels sluggish when the display area is large, and the refresh interval is not very even.
2. Main Code
#pragma once
#include <QObject>
#include <QAudioFormat>
/**
 * @brief WAV file header struct
 * @author 龚建波
 * @date 2020-11-12
 * @details
 * The WAV header is a variable-length format, but a rather simple layout is
 * used here for easy handling: a RIFF chunk, the file-format tag, an fmt
 * chunk, a fact chunk when the encoding is compressed, and a data chunk.
 * (Values are stored little-endian; PCs generally default to little-endian
 * storage, so no special handling for now.)
 * Reference: https://www.cnblogs.com/ranson7zop/p/7657874.html
 * Reference: https://www.cnblogs.com/Ph-one/p/6839892.html
 */
#pragma pack(push,1)
/// RIFF chunk of the WAV header
struct AVWavRiffChunk
{
    char chunkId[4];        // document id, uppercase "RIFF"
    // Total bytes from the first address of the next field to the end of the
    // file; this value plus 8 is the actual length of the file.
    unsigned int chunkSize;
    char format[4];         // file format tag, uppercase "WAVE"
};
/// fmt chunk of the WAV header
struct AVWavFmtChunk
{
    char chunkId[4];        // format chunk id, lowercase "fmt "
    // Format chunk length; can be 16, 18, 20, 40, etc.
    // Should be the byte size from audioFormat to bitsPerSample; fixed at 16 here.
    unsigned int chunkSize;
    unsigned short audioFormat;   // encoding format code, 1 = PCM
    unsigned short numChannels;   // number of channels
    unsigned int sampleRate;      // sample rate
    // For PCM: channels x sample rate x bits per sample / 8.
    // Players can use this value to estimate buffer sizes.
    unsigned int byteRate;        // byte rate (data transfer rate)
    // Size of one sample frame: channels x bits per sample / 8.
    // Players process multiples of this value at a time and size buffers with it.
    unsigned short blockAlign;    // block alignment unit
    // Number of bits used to store each sample value; common values are 16, 24, 32.
    unsigned short bitsPerSample; // sample size / depth / precision
};
/// data chunk of the WAV header
struct AVWavDataChunk
{
    char chunkId[4];        // marks the start of the data, lowercase "data"
    unsigned int chunkSize; // length of the data section
};
/// WAV format header, fixed at 44 bytes here
struct AVWavHead
{
    AVWavRiffChunk riff;
    AVWavFmtChunk fmt;
    AVWavDataChunk data;

    /// Default constructor, does nothing
    AVWavHead(){}
    /**
     * @brief Construct a WAV header from file data that has been read in.
     * Generally used when reading a file; call isValid() afterwards to check validity.
     * @param audioData header bytes; only the 44-byte layout is supported for now
     */
    AVWavHead(const QByteArray &audioData);
    /**
     * @brief Construct a WAV header from sample rate, sample size, etc.
     * Generally called with these parameters when writing a file.
     * @param sampleRate sample rate, e.g. 16000 Hz
     * @param sampleSize sample size, e.g. 16 bit
     * @param channelCount number of channels, e.g. 1 for mono
     * @param dataSize number of valid data bytes
     */
    AVWavHead(int sampleRate, int sampleSize,
              int channelCount, unsigned int dataSize);
    /**
     * @brief Check whether this WAV header is valid.
     * Mainly used when reading and parsing files written with this header layout.
     * @return true if the format is valid
     */
    bool isValid() const;
};
#pragma pack(pop)
#include "AVWavDefine.h"
#include <memory>
const char *RIFF_FLAG = "RIFF";
const char *WAVE_FLAG = "WAVE";
const char *FMT_FLAG = "fmt ";
const char *DATA_FLAG = "data";
AVWavHead::AVWavHead(const QByteArray &audioData)
{
    // Zero-init first so that isValid() fails cleanly on a short buffer
    memset(this, 0, sizeof(AVWavHead));
    if (audioData.size() < (int)sizeof(AVWavHead))
        return;
    memcpy(this, audioData.constData(), sizeof(AVWavHead));
}
AVWavHead::AVWavHead(int sampleRate, int sampleSize,
                     int channelCount, unsigned int dataSize)
{
    // If the length is not divisible by the sample width, x bytes of data could be dropped:
    //if (dataSize % sampleByte != 0) {
    //    dataSize -= xByte;
    //}
    // Zero out before assigning
    memset(this, 0, sizeof(AVWavHead));
    memcpy(riff.chunkId, RIFF_FLAG, 4);
    memcpy(riff.format, WAVE_FLAG, 4);
    memcpy(fmt.chunkId, FMT_FLAG, 4);
    memcpy(data.chunkId, DATA_FLAG, 4);
    // Length excluding the first 8 header bytes; with the fixed 44-byte header, 44 - 8 = 36
    riff.chunkSize = dataSize + 36;
    // fmt chunk size
    fmt.chunkSize = 16;
    // 1 = PCM
    fmt.audioFormat = 0x01;
    fmt.numChannels = channelCount;
    fmt.sampleRate = sampleRate;
    fmt.byteRate = (sampleSize / 8) * channelCount * sampleRate;
    fmt.blockAlign = (sampleSize / 8) * channelCount;
    fmt.bitsPerSample = sampleSize;
    // Data length excluding the header
    data.chunkSize = dataSize;
}
bool AVWavHead::isValid() const
{
    // A simple comparison, mainly for parsing WAV headers without a full parser
    if (memcmp(riff.chunkId, RIFF_FLAG, 4) != 0 ||
        memcmp(riff.format, WAVE_FLAG, 4) != 0 ||
        memcmp(fmt.chunkId, FMT_FLAG, 4) != 0 ||
        memcmp(data.chunkId, DATA_FLAG, 4) != 0 ||
        riff.chunkSize != data.chunkSize + 36 ||
        fmt.audioFormat != 0x01)
        return false;
    return true;
}
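As a quick sanity check of the header arithmetic above (plain C++, no Qt; `wavParams` is a hypothetical helper that mirrors the derived fields of the 44-byte header):

```cpp
#include <cstdint>

struct WavParams {
    std::uint32_t byteRate;      // channels x sample rate x bits per sample / 8
    std::uint16_t blockAlign;    // channels x bits per sample / 8
    std::uint32_t riffChunkSize; // dataSize + 44 - 8
};

// Hypothetical helper mirroring the fields AVWavHead derives from its inputs.
WavParams wavParams(int sampleRate, int sampleSize, int channels,
                    std::uint32_t dataSize)
{
    WavParams p;
    p.blockAlign = static_cast<std::uint16_t>(channels * sampleSize / 8);
    p.byteRate = static_cast<std::uint32_t>(sampleRate) * p.blockAlign;
    // the fixed header is 44 bytes; the 8 bytes before riff.chunkSize are excluded
    p.riffChunkSize = dataSize + 36;
    return p;
}
```

For the fixed 16 kHz / 16-bit / mono format this gives byteRate = 32000 and blockAlign = 2, which is what a 44-byte header for that stream should contain.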
#pragma once
#include <QQuickPaintedItem>
#include <QIODevice>
#include <QAudioDeviceInfo>
#include <QAudioFormat>
#include <QAudioInput>
#include <QAudioOutput>
#include <QTimer>
#include <QPainterPath>
// WAV header definition, used when reading/writing files
#include "AudioViewComponent/Common/AVWavDefine.h"

class SimpleAudioRecorder;
/**
 * @brief Callback device used when reading/writing data
 * QAudioInput/Output need a QIODevice to read and write data
 * @author 龚建波
 * @date 2021-12-10
 * @history
 * 2021-12-29 fixed the play-cursor offset and the playback reset problem
 */
class SimpleAudioDevice : public QIODevice
{
public:
    explicit SimpleAudioDevice(SimpleAudioRecorder *recorder, QObject *parent);

    qint64 readData(char *data, qint64 maxSize) override;
    qint64 writeData(const char *data, qint64 maxSize) override;

private:
    SimpleAudioRecorder *recorderPtr;
};
/**
 * @brief Simple audio recording component, without the modules split into separate classes
 * @author 龚建波
 * @date 2021-12-10
 * @details
 * Simplified version: the recording format is fixed and the default
 * input/output devices are used; 16 kHz sample rate, 16-bit samples, mono
 */
class SimpleAudioRecorder : public QQuickPaintedItem
{
    Q_OBJECT
    Q_PROPERTY(SimpleAudioRecorder::RecorderState workState READ getWorkState NOTIFY workStateChanged)
    Q_PROPERTY(bool hasData READ getHasData NOTIFY hasDataChanged)
    Q_PROPERTY(qint64 duration READ getDuration NOTIFY durationChanged)
    Q_PROPERTY(QString durationString READ getDurationString NOTIFY durationChanged)
    Q_PROPERTY(qint64 position READ getPosition NOTIFY positionChanged)
    Q_PROPERTY(QString positionString READ getPositionString NOTIFY positionChanged)
public:
    //state
    enum RecorderState
    {
        Stop,        // stopped (default)
        Playing,     // playing
        PlayPause,   // playback paused
        Recording,   // recording
        RecordPause  // recording paused
    };
    Q_ENUM(RecorderState)
public:
    explicit SimpleAudioRecorder(QQuickItem *parent = nullptr);
    ~SimpleAudioRecorder();
    //current state
    SimpleAudioRecorder::RecorderState getWorkState() const;
    void setWorkState(SimpleAudioRecorder::RecorderState state);
    //whether there is data at present
    bool getHasData() const;
    void setHasData(bool has);
    //total duration of the current data, in ms
    qint64 getDuration() const;
    void updateDuration();
    //convert the duration in ms to hh:mm:ss format
    QString getDurationString() const;
    //current playback or recording time, in ms
    qint64 getPosition() const;
    void updatePosition();
    QString getPositionString() const;
    //callback interfaces used while recording/playing
    qint64 readData(char *data, qint64 maxSize);
    qint64 writeData(const char *data, qint64 maxSize);
protected:
    void paint(QPainter *painter) override;
    void geometryChanged(const QRectF &newGeometry, const QRectF &oldGeometry) override;
private:
    //update the sample path used for drawing
    void updateSamplePath();
signals:
    void workStateChanged();
    void hasDataChanged();
    void durationChanged();
    void positionChanged();
public slots:
    //refresh the UI
    void refresh();
    //stop playback/recording
    void stop();
    //play
    void play();
    //pause playback
    void playPause();
    //resume playback
    void playResume();
    //record
    void record();
    //pause recording
    void recordPause();
    //resume recording
    void recordResume();
    //save the data to a file
    void saveFile(const QString &filepath);
    //load data from a file
    void loadFile(const QString &filepath);
private:
    //audio input
    QAudioInput *audioInput{nullptr};
    //audio output
    QAudioOutput *audioOutput{nullptr};
    //QAudioInput/Output need a QIODevice to read and write data
    SimpleAudioDevice *audioDevice{nullptr};
    //audio format, fixed for now
    QAudioFormat audioFormat;
    //current state
    SimpleAudioRecorder::RecorderState workState{SimpleAudioRecorder::Stop};
    //output byte counter
    qint64 outputCount{0};
    //played byte counter
    qint64 playCount{0};
    //data buffer
    QByteArray audioData;
    bool hasData{false};
    //path connecting the sampled points
    QPainterPath samplePath;
    //data duration in ms
    qint64 audioDuration{0};
    //playback or recording position in ms
    qint64 audioPostion{0};
    //the four margins
    //in this version the axis scale is part of the view, so its size is included in the padding
    int leftPadding{55};
    int rightPadding{15};
    int topPadding{15};
    int bottomPadding{15};
};
#include "SimpleAudioRecorder.h"
#include <cmath>
#include <QtMath>
#include <QTime>
#include <QFileInfo>
#include <QFile>
#include <QDir>
#include <QPainter>
#include <QFontMetrics>
#include <QDebug>
SimpleAudioDevice::SimpleAudioDevice(SimpleAudioRecorder *recorder, QObject *parent)
    : QIODevice(parent), recorderPtr(recorder)
{
    Q_ASSERT(recorderPtr != nullptr);
}

qint64 SimpleAudioDevice::readData(char *data, qint64 maxSize)
{
    return recorderPtr->readData(data, maxSize);
}

qint64 SimpleAudioDevice::writeData(const char *data, qint64 maxSize)
{
    return recorderPtr->writeData(data, maxSize);
}
SimpleAudioRecorder::SimpleAudioRecorder(QQuickItem *parent)
    : QQuickPaintedItem(parent)
{
    //passed as the constructor argument of QAudioInput/Output,
    //whose read/write callbacks are invoked during input and output
    audioDevice = new SimpleAudioDevice(this, this);
    audioDevice->open(QIODevice::ReadWrite);
    //sample size and channel count are fixed defaults for now
    audioFormat.setSampleRate(16000);
    audioFormat.setChannelCount(1);
    audioFormat.setSampleSize(16);
    audioFormat.setCodec("audio/pcm");
    audioFormat.setByteOrder(QAudioFormat::LittleEndian);
    audioFormat.setSampleType(QAudioFormat::SignedInt);
}
SimpleAudioRecorder::~SimpleAudioRecorder()
{
    stop();
    audioDevice->close();
}

SimpleAudioRecorder::RecorderState SimpleAudioRecorder::getWorkState() const
{
    return workState;
}

void SimpleAudioRecorder::setWorkState(RecorderState state)
{
    if (workState != state)
    {
        workState = state;
        emit workStateChanged();
    }
}

bool SimpleAudioRecorder::getHasData() const
{
    return hasData;
}

void SimpleAudioRecorder::setHasData(bool has)
{
    if (hasData != has)
    {
        hasData = has;
        emit hasDataChanged();
    }
}

qint64 SimpleAudioRecorder::getDuration() const
{
    return audioDuration;
}
void SimpleAudioRecorder::updateDuration()
{
    //computed from the audio parameters and the data length
    const int sample_rate = audioFormat.sampleRate();
    const int sample_byte = audioFormat.sampleSize() / 8;
    const int channel_count = audioFormat.channelCount();
    qint64 duration = 0;
    if (audioData.size() > 0 && sample_rate > 0 && sample_byte > 0 && channel_count > 0) {
        //duration = total samples / samples per second
        //seconds * 1000 = ms
        duration = (audioData.size() / sample_byte) / (1.0 * channel_count * sample_rate) * 1000;
    }
    if (audioDuration != duration) {
        audioDuration = duration;
        emit durationChanged();
    }
}
QString SimpleAudioRecorder::getDurationString() const
{
    return QTime(0, 0).addMSecs(audioDuration).toString("hh:mm:ss");
}

qint64 SimpleAudioRecorder::getPosition() const
{
    return audioPostion;
}
void SimpleAudioRecorder::updatePosition()
{
    if (getWorkState() == Playing || getWorkState() == PlayPause)
    {
        const int sample_rate = audioFormat.sampleRate();
        //divide by 2 because samples are 16 bit
        audioPostion = ((playCount / 2) / (1.0 * sample_rate) * 1000);
    }
    else
    {
        //position is 0 when not playing
        audioPostion = 0; // getDuration();
    }
    emit positionChanged();
}
QString SimpleAudioRecorder::getPositionString() const
{
    return QTime(0, 0).addMSecs(audioPostion).toString("hh:mm:ss");
}
qint64 SimpleAudioRecorder::readData(char *data, qint64 maxSize)
{
    if (!data || maxSize < 1)
        return 0;
    //for playing a selection, the end position minus the play position could be used instead
    const int data_size = audioData.count() - outputCount;
    if (data_size <= 0)
    {
        //qDebug()<<__FUNCTION__<<"finish";
        /// stateChanged does not signal a stop here, but returning 0 triggers
        /// the IdleState state, so the end of playback is now detected via IdleState.
        //A timer longer than notifyInterval was used before so playback could finish completely:
        //const int sample_rate = audioFormat.sampleRate();
        //duration = total samples / samples per second; seconds * 1000 = ms
        //qint64 duration = (audioOutput->bufferSize() / 2) / (1.0 * sample_rate) * 1000;
        //This branch is entered several times at the end of playback,
        //so a flag would be needed to arm the timer only once:
        //QTimer::singleShot(duration + 30, [this]{ stop(); });
        return 0;
    }
    const int read_size = (data_size >= maxSize) ? maxSize : data_size;
    memcpy(data, audioData.constData() + outputCount, read_size);
    outputCount += read_size;
    //refresh(); //this callback interval is too long to drive the refresh
    return read_size;
}
qint64 SimpleAudioRecorder::writeData(const char *data, qint64 maxSize)
{
    //mono 16-bit by default
    audioData.append(data, maxSize);
    setHasData(!audioData.isEmpty());
    updateSamplePath();
    updateDuration();
    updatePosition();
    refresh(); //repaint whenever new data arrives
    return maxSize;
}
void SimpleAudioRecorder::paint(QPainter *painter)
{
    //width and height of the series area
    const int view_width = (width() - leftPadding - rightPadding);
    const int view_height = (height() - topPadding - bottomPadding);
    //origin of the series area
    const int wave_x = leftPadding;
    const int wave_y = view_height / 2 + topPadding;

    //background color
    painter->setPen(Qt::NoPen);
    //painter->setRenderHint(QPainter::Antialiasing, true);
    painter->setBrush(QColor(34, 34, 34));
    painter->drawRect(0, 0, width(), height());
    //painter->setRenderHint(QPainter::Antialiasing, false);
    painter->setBrush(Qt::NoBrush);
    //note: the second argument is the y coordinate, i.e. topPadding
    painter->fillRect(leftPadding, topPadding, view_width, view_height, QColor(10, 10, 10));

    //grid: evenly spaced horizontal lines, the middle one in red
    painter->translate(wave_x, wave_y);
    painter->setPen(QColor(200, 10, 10));
    painter->drawLine(0, 0, view_width, 0);
    int y_px = 0;
    painter->setPen(QColor(50, 50, 50));
    for (int i = 1; i <= 3; i++)
    {
        y_px = i * view_height / 2 / 3;
        painter->drawLine(0, y_px, view_width, y_px);
        painter->drawLine(0, -y_px, view_width, -y_px);
    }
    painter->translate(-wave_x, -wave_y);

    //draw the curve only when there is data
    if (!audioData.isEmpty())
    {
        //draw the waveform
        painter->setPen(QColor(67, 217, 150));
        painter->translate(wave_x, wave_y);
        painter->drawPath(samplePath);
        painter->translate(-wave_x, -wave_y);
        //draw the play cursor
        painter->setPen(QColor(200, 10, 10));
        const int play_pos = double(playCount) / audioData.count() * view_width + leftPadding + 1;
        painter->drawLine(play_pos, topPadding,
                          play_pos, height() - bottomPadding);
    }

    //amplitude labels on the vertical axis
    painter->translate(wave_x, wave_y);
    QString y_text;
    painter->setPen(QColor(200, 200, 200));
    painter->drawText(-5 - painter->fontMetrics().horizontalAdvance("0"),
                      painter->fontMetrics().height() / 2.5, "0");
    for (int i = 1; i <= 3; i++)
    {
        //negated because Qt's screen coordinates start at the top-left,
        //with the positive direction toward the bottom-right
        y_px = -i * view_height / 2 / 3;
        y_text = QString::number(i * 1200);
        painter->drawText(-5 - painter->fontMetrics().horizontalAdvance(y_text),
                          y_px + painter->fontMetrics().height() / 2.5,
                          y_text);
        y_text = QString::number(-i * 1200);
        painter->drawText(-5 - painter->fontMetrics().horizontalAdvance(y_text),
                          -y_px + painter->fontMetrics().height() / 2.5,
                          y_text);
    }
    painter->translate(-wave_x, -wave_y);
    painter->setPen(QColor(200, 200, 200));
    painter->drawLine(leftPadding, topPadding, leftPadding, topPadding + view_height);
    //time labels along the horizontal axis are omitted
    painter->drawLine(leftPadding, topPadding + view_height,
                      leftPadding + view_width, topPadding + view_height);
}
void SimpleAudioRecorder::geometryChanged(const QRectF &newGeometry, const QRectF &oldGeometry)
{
    QQuickPaintedItem::geometryChanged(newGeometry, oldGeometry);
    updateSamplePath();
    refresh();
}
void SimpleAudioRecorder::updateSamplePath()
{
    samplePath = QPainterPath();
    const int data_count = audioData.count();
    //need at least one 16-bit sample and an even byte count
    if (data_count < 2 || data_count % 2 != 0)
        return;
    //the visible data range can depend on the mode; fixed values for now
    //seconds * channels * sampleRate
    int data_show = data_count / 2;
    if (getWorkState() == Recording || getWorkState() == RecordPause)
    {
        //while recording, show at most the latest 5 s of mono data
        const int max_show = 5 * 1 * audioFormat.sampleRate();
        if (data_show > max_show)
            data_show = max_show;
    }
    if (data_count < data_show * 2)
        return;
    const int sample_count = data_show;
    const short *data_ptr = (const short *)audioData.constData() + (data_count / 2 - data_show);
    //number of samples per segment
    //halved because the result otherwise looked too sparse compared with Audition
    int x_step = std::ceil(sample_count / (double)width()) / 2;
    if (x_step < 1)
        x_step = 1;
    else if (x_step > sample_count)
        x_step = sample_count;
    //fit to the axes
    const double x_scale = (width() - leftPadding - rightPadding) / (double)sample_count;
    //negated because Qt's screen coordinates start at the top-left,
    //with the positive direction toward the bottom-right
    const double y_scale = -(height() - topPadding - bottomPadding) / (double)0x10000;

    short cur_max = 0;
    short cur_min = 0;
    int index_max = 0;
    int index_min = 0;
    samplePath.moveTo(0, data_ptr[0] * y_scale);
    //take the min and max of each segment as its sample points
    for (int i = 0; i < sample_count; i += x_step)
    {
        cur_max = data_ptr[i];
        cur_min = data_ptr[i];
        index_max = i;
        index_min = i;
        for (int j = i; j < i + x_step && j < sample_count; j++)
        {
            //scan the segment for its min and max
            if (cur_max < data_ptr[j])
            {
                cur_max = data_ptr[j];
                index_max = j;
            }
            if (cur_min > data_ptr[j])
            {
                cur_min = data_ptr[j];
                index_min = j;
            }
        }
        QPointF pt_min{index_min * x_scale, cur_min * y_scale};
        QPointF pt_max{index_max * x_scale, cur_max * y_scale};
        //store min and max in order of appearance; only one if they are equal
        if (index_max < index_min)
        {
            samplePath.lineTo(pt_max);
        }
        samplePath.lineTo(pt_min);
        if (index_max > index_min)
        {
            samplePath.lineTo(pt_max);
        }
    }
}
void SimpleAudioRecorder::refresh()
{
    update();
}
void SimpleAudioRecorder::stop()
{
    //stop() is called for both recording and playback, so shared state resets live here
    //(audioData is kept on stop and only cleared on the next start)
    outputCount = 0;
    playCount = 0;
    switch (getWorkState())
    {
    case Stop:
        break;
    case Playing:
    case PlayPause:
        if (audioOutput) {
            audioOutput->stop();
        }
        break;
    case Recording:
    case RecordPause:
        if (audioInput) {
            audioInput->stop();
        }
        break;
    default:
        break;
    }
    setWorkState(Stop);
    updateSamplePath();
    updatePosition();
    refresh();
}
void SimpleAudioRecorder::play()
{
    //resume if paused
    if (getWorkState() == PlayPause)
    {
        playResume();
        return;
    }
    stop();
    if (audioData.isEmpty())
        return;
    if (!audioOutput)
    {
        //use the default output device, i.e. the system's currently configured default
        audioOutput = new QAudioOutput(QAudioDeviceInfo::defaultOutputDevice(), audioFormat, this);
        connect(audioOutput, &QAudioOutput::stateChanged, this, [this](QAudio::State state)
        {
            //IdleState is entered when there is no more audio data to process
            if (state == QAudio::IdleState) {
                stop();
            }
        });
        connect(audioOutput, &QAudioOutput::notify, this, [this]()
        {
            if (getDuration() > 0) {
                //processedUSecs() returns the us elapsed since start,
                //though there is a slight delay after start
                //progress = played time / total time * total bytes; mind the time units
                playCount = (audioOutput->processedUSecs() / 1000.0) /
                            audioDuration * audioData.count();
                if (playCount > outputCount) {
                    playCount = outputCount;
                }
                //subtract temp_offset to compensate for buffered but not yet played data,
                //keeping the cursor in sync with the audio
                int temp_offset = (audioOutput->bufferSize() - audioOutput->bytesFree());
                if (temp_offset < 0) {
                    temp_offset = 0;
                }
                playCount -= temp_offset;
                if (playCount < 0) {
                    playCount = 0;
                }
                updatePosition();
                refresh();
            }
        });
        //the notify signal currently drives the progress refresh
        audioOutput->setNotifyInterval(30);
    }
    //audioOutput->reset() was used here before, but repeated playback made notify stutter
    audioDevice->reset();
    audioOutput->start(audioDevice);
    //switch to the playing state
    setWorkState(Playing);
}
void SimpleAudioRecorder::playPause()
{
    if (getWorkState() != Playing)
        return;
    if (audioOutput)
        audioOutput->suspend();
    setWorkState(PlayPause);
}

void SimpleAudioRecorder::playResume()
{
    if (getWorkState() != PlayPause)
        return;
    if (audioOutput)
        audioOutput->resume();
    setWorkState(Playing);
}
void SimpleAudioRecorder::record()
{
    //resume if paused
    if (getWorkState() == RecordPause)
    {
        recordResume();
        return;
    }
    stop();
    //clear the data buffer when recording starts
    audioData.clear();
    setHasData(false);
    samplePath = QPainterPath();
    if (!audioInput)
    {
        //use the default input device, i.e. the system's currently configured default
        audioInput = new QAudioInput(QAudioDeviceInfo::defaultInputDevice(), audioFormat, this);
        connect(audioInput, &QAudioInput::stateChanged, this, []() {});
        connect(audioInput, &QAudioInput::notify, this, []() {});
    }
    audioDevice->reset();
    audioInput->start(audioDevice);
    //switch to the recording state
    setWorkState(Recording);
}
void SimpleAudioRecorder::recordPause()
{
    if (getWorkState() != Recording)
        return;
    if (audioInput)
        audioInput->suspend();
    setWorkState(RecordPause);
}

void SimpleAudioRecorder::recordResume()
{
    if (getWorkState() != RecordPause)
        return;
    if (audioInput)
        audioInput->resume();
    setWorkState(Recording);
}
void SimpleAudioRecorder::saveFile(const QString &filepath)
{
    qDebug() << __FUNCTION__ << filepath;
    stop();
    if (audioData.isEmpty())
        return;
    //QFile cannot create directories
    QFileInfo info(filepath);
    if (!info.dir().exists())
        info.dir().mkpath(info.absolutePath());
    QFile file(filepath);
    if (file.open(QIODevice::WriteOnly))
    {
        //written out in one go for now
        AVWavHead head(audioFormat.sampleRate(), audioFormat.sampleSize(),
                       audioFormat.channelCount(), audioData.size());
        file.write((const char *)(&head), sizeof(AVWavHead));
        file.write(audioData);
        file.close();
    }
}
void SimpleAudioRecorder::loadFile(const QString &filepath)
{
    qDebug() << __FUNCTION__ << filepath;
    stop();
    //clear the data buffer when loading
    audioData.clear();
    setHasData(false);
    samplePath = QPainterPath();
    QFile file(filepath);
    if (file.exists() && file.size() > 44 &&
        file.open(QIODevice::ReadOnly))
    {
        AVWavHead head;
        file.read((char *)&head, 44);
        QByteArray pcm_data;
        if (head.isValid())
        {
            //read everything in one go for now
            pcm_data = file.readAll();
            file.close();
        }
        //the sample rate and the other parameters must match the fixed format
        if (pcm_data.count() > 0 && pcm_data.count() % 2 == 0 &&
            head.fmt.sampleRate == audioFormat.sampleRate() &&
            head.fmt.bitsPerSample == audioFormat.sampleSize() &&
            head.fmt.numChannels == audioFormat.channelCount())
        {
            audioData = pcm_data;
            setHasData(!audioData.isEmpty());
            updateSamplePath();
            updateDuration();
            updatePosition();
            refresh();
        }
    }
}