I recently worked on a screen-recording feature.

Note: the first part of this post walks through the framework source to show why internal audio cannot be captured, along with the pitfalls I hit. Below the divider there is copy-paste-ready screen-recording code; impatient readers can skip everything above the divider.

Up front: I have found no way for a third-party app to capture internal audio — the capability requires a system permission that is not granted to third-party apps.

Let's start with the source:
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes) throws IllegalArgumentException {
    this((new AudioAttributes.Builder())
                    .setInternalCapturePreset(audioSource)
                    .build(),
            (new AudioFormat.Builder())
                    .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,
                            true/*allow legacy configurations*/))
                    .setEncoding(audioFormat)
                    .setSampleRate(sampleRateInHz)
                    .build(),
            bufferSizeInBytes,
            AudioManager.AUDIO_SESSION_ID_GENERATE);
}
- audioSource — the audio input source. Accepted values are defined in MediaRecorder.AudioSource; DEFAULT or MIC is usually what you want.
- sampleRateInHz — the sample rate for capture. 44100 (44.1 kHz) is the most common choice: by the Nyquist sampling theorem it is high enough to cover the full range of human hearing.
- channelConfig — how many channels AudioRecord captures. Presets are defined in AudioFormat; the commonly used ones are CHANNEL_CONFIGURATION_MONO (mono) and CHANNEL_CONFIGURATION_STEREO (stereo), both since deprecated in favor of CHANNEL_IN_MONO and CHANNEL_IN_STEREO.
- audioFormat — the PCM sample format, also defined in AudioFormat. Common values are ENCODING_PCM_8BIT, ENCODING_PCM_16BIT, and ENCODING_PCM_FLOAT; notably, ENCODING_PCM_16BIT is the format guaranteed to work across Android devices.
- bufferSizeInBytes — the size of AudioRecord's internal audio buffer. Broadly, a smaller buffer means lower audio latency. Use AudioRecord.getMinBufferSize() to obtain the minimum size rather than computing it yourself — different vendors may implement capture buffering differently.
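As a back-of-the-envelope check on what getMinBufferSize() returns, the raw PCM arithmetic is simple. This helper and its parameter values are purely illustrative, not part of the Android API:

```java
public class PcmMath {
    // Bytes of raw PCM produced by "millis" of capture.
    // bytesPerSample is 2 for ENCODING_PCM_16BIT, 1 for ENCODING_PCM_8BIT.
    static int bytesForMillis(int sampleRateHz, int channels, int bytesPerSample, int millis) {
        return sampleRateHz * channels * bytesPerSample * millis / 1000;
    }

    public static void main(String[] args) {
        // 44.1 kHz, stereo, 16-bit: 100 ms of audio is 17640 bytes.
        System.out.println(bytesForMillis(44100, 2, 2, 100));
    }
}
```

Whatever this arithmetic suggests, the value reported by AudioRecord.getMinBufferSize() on the actual device is the floor you must respect.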
audioSource determines the capture input source, so I expected it to offer options distinguishing internal audio from the microphone. When I first saw this parameter I was already 80% sure the feature was doable. Let's keep reading:
@SystemApi
public Builder setInternalCapturePreset(int preset) {
    if ((preset == MediaRecorder.AudioSource.HOTWORD)
            || (preset == MediaRecorder.AudioSource.REMOTE_SUBMIX)
            || (preset == MediaRecorder.AudioSource.RADIO_TUNER)
            || (preset == MediaRecorder.AudioSource.VOICE_DOWNLINK)
            || (preset == MediaRecorder.AudioSource.VOICE_UPLINK)
            || (preset == MediaRecorder.AudioSource.VOICE_CALL)) {
        mSource = preset;
    } else {
        setCapturePreset(preset);
    }
    return this;
}
Comparing against the list of capture sources tells us which value to pass. The framework documents them as follows:
public final class AudioSource {
private AudioSource() {}
/** @hide */
public final static int AUDIO_SOURCE_INVALID = -1;
/* Do not change these values without updating their counterparts
* in system/media/audio/include/system/audio.h!
*/
/** Default audio source **/
public static final int DEFAULT = 0;
/** Microphone audio source */
public static final int MIC = 1;
/** Voice call uplink (Tx) audio source.
* <p>
* Capturing from <code>VOICE_UPLINK</code> source requires the
* {@link android.Manifest.permission#CAPTURE_AUDIO_OUTPUT} permission.
* This permission is reserved for use by system components and is not available to
* third-party applications.
* </p>
*/
public static final int VOICE_UPLINK = 2;
/** Voice call downlink (Rx) audio source.
* <p>
* Capturing from <code>VOICE_DOWNLINK</code> source requires the
* {@link android.Manifest.permission#CAPTURE_AUDIO_OUTPUT} permission.
* This permission is reserved for use by system components and is not available to
* third-party applications.
* </p>
*/
public static final int VOICE_DOWNLINK = 3;
/** Voice call uplink + downlink audio source
* <p>
* Capturing from <code>VOICE_CALL</code> source requires the
* {@link android.Manifest.permission#CAPTURE_AUDIO_OUTPUT} permission.
* This permission is reserved for use by system components and is not available to
* third-party applications.
* </p>
*/
public static final int VOICE_CALL = 4;
/** Microphone audio source tuned for video recording, with the same orientation
* as the camera if available. */
public static final int CAMCORDER = 5;
/** Microphone audio source tuned for voice recognition. */
public static final int VOICE_RECOGNITION = 6;
/** Microphone audio source tuned for voice communications such as VoIP. It
* will for instance take advantage of echo cancellation or automatic gain control
* if available.
*/
public static final int VOICE_COMMUNICATION = 7;
/**
* Audio source for a submix of audio streams to be presented remotely.
* <p>
* An application can use this audio source to capture a mix of audio streams
* that should be transmitted to a remote receiver such as a Wifi display.
* While recording is active, these audio streams are redirected to the remote
* submix instead of being played on the device speaker or headset.
* </p><p>
* Certain streams are excluded from the remote submix, including
* {@link AudioManager#STREAM_RING}, {@link AudioManager#STREAM_ALARM},
* and {@link AudioManager#STREAM_NOTIFICATION}. These streams will continue
* to be presented locally as usual.
* </p><p>
* Capturing the remote submix audio requires the
* {@link android.Manifest.permission#CAPTURE_AUDIO_OUTPUT} permission.
* This permission is reserved for use by system components and is not available to
* third-party applications.
* </p>
*/
@RequiresPermission(android.Manifest.permission.CAPTURE_AUDIO_OUTPUT)
public static final int REMOTE_SUBMIX = 8;
/** Microphone audio source tuned for unprocessed (raw) sound if available, behaves like
* {@link #DEFAULT} otherwise. */
public static final int UNPROCESSED = 9;
/**
* Audio source for capturing broadcast radio tuner output.
* @hide
*/
@SystemApi
public static final int RADIO_TUNER = 1998;
/**
* Audio source for preemptible, low-priority software hotword detection
* It presents the same gain and pre processing tuning as {@link #VOICE_RECOGNITION}.
* <p>
* An application should use this audio source when it wishes to do
* always-on software hotword detection, while gracefully giving in to any other application
* that might want to read from the microphone.
* </p>
* This is a hidden audio source.
* @hide
*/
@SystemApi
@RequiresPermission(android.Manifest.permission.CAPTURE_AUDIO_HOTWORD)
public static final int HOTWORD = 1999;
}
REMOTE_SUBMIX is clearly the source I need — but it requires the CAPTURE_AUDIO_OUTPUT permission, which is marked @SystemApi: a system permission reserved for system components.

In other words, game over — this road is closed. If your project can ship a customized system image it is entirely doable, but for an ordinary third-party app like mine it is not.

That is why I rambled on and pasted so much source code: it proves that without a system permission, capturing internal audio cannot be done — really cannot. There may be some other way I'm unaware of; if you know one, please leave a comment. Thanks for sharing!
2: Plan B. My actual requirement was only to overlay a suitably epic music track, looping, while the user watches their match replay. So my lead's idea was to mux an audio track into the MP4 video.

https://developer.android.google.cn/reference/kotlin/android/media/MediaMuxer.html

My lead sent me that link and said it was simple — it's a system API, just copy it in. I opened it and had a look; forgive my English, but honestly it looked like rough going, so I gave up on it and went searching for an existing wheel instead:

https://www.jb51.net/article/129642.htm

That page offers several approaches, which I record here in case the post ever disappears. I have not verified whether they work — just as I was about to experiment, the requirements changed.
Method 1 (fails)

Uses MediaMuxer to combine audio and video.

Result: the merged file plays fine in Android's native VideoView and SurfaceView, and in most players. But — and it's a big but — after uploading to YouTube the audio becomes discontinuous. YouTube re-compresses uploads, and the audio timing apparently breaks during that pass.

Analysis: the presentationTimeUs written into MediaCodec.BufferInfo is wrong, so YouTube's re-compression scrambles the audio.
public static void muxVideoAndAudio(String videoPath, String audioPath, String muxPath) {
try {
MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource(videoPath);
MediaFormat videoFormat = null;
int videoTrackIndex = -1;
int videoTrackCount = videoExtractor.getTrackCount();
for (int i = 0; i < videoTrackCount; i++) {
videoFormat = videoExtractor.getTrackFormat(i);
String mimeType = videoFormat.getString(MediaFormat.KEY_MIME);
if (mimeType.startsWith("video/")) {
videoTrackIndex = i;
break;
}
}
MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource(audioPath);
MediaFormat audioFormat = null;
int audioTrackIndex = -1;
int audioTrackCount = audioExtractor.getTrackCount();
for (int i = 0; i < audioTrackCount; i++) {
audioFormat = audioExtractor.getTrackFormat(i);
String mimeType = audioFormat.getString(MediaFormat.KEY_MIME);
if (mimeType.startsWith("audio/")) {
audioTrackIndex = i;
break;
}
}
videoExtractor.selectTrack(videoTrackIndex);
audioExtractor.selectTrack(audioTrackIndex);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
MediaMuxer mediaMuxer = new MediaMuxer(muxPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int writeVideoTrackIndex = mediaMuxer.addTrack(videoFormat);
int writeAudioTrackIndex = mediaMuxer.addTrack(audioFormat);
mediaMuxer.start();
ByteBuffer byteBuffer = ByteBuffer.allocate(500 * 1024);
long sampleTime = 0;
{
videoExtractor.readSampleData(byteBuffer, 0);
if (videoExtractor.getSampleFlags() == MediaExtractor.SAMPLE_FLAG_SYNC) {
videoExtractor.advance();
}
videoExtractor.readSampleData(byteBuffer, 0);
long secondTime = videoExtractor.getSampleTime();
videoExtractor.advance();
long thirdTime = videoExtractor.getSampleTime();
sampleTime = Math.abs(thirdTime - secondTime);
}
videoExtractor.unselectTrack(videoTrackIndex);
videoExtractor.selectTrack(videoTrackIndex);
while (true) {
int readVideoSampleSize = videoExtractor.readSampleData(byteBuffer, 0);
if (readVideoSampleSize < 0) {
break;
}
videoBufferInfo.size = readVideoSampleSize;
// A fixed increment is used instead of videoExtractor.getSampleTime() —
// this is the timestamp handling the analysis above blames.
videoBufferInfo.presentationTimeUs += sampleTime;
videoBufferInfo.offset = 0;
//noinspection WrongConstant
videoBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME;//videoExtractor.getSampleFlags()
mediaMuxer.writeSampleData(writeVideoTrackIndex, byteBuffer, videoBufferInfo);
videoExtractor.advance();
}
while (true) {
int readAudioSampleSize = audioExtractor.readSampleData(byteBuffer, 0);
if (readAudioSampleSize < 0) {
break;
}
audioBufferInfo.size = readAudioSampleSize;
// The *video* frame interval is applied to the audio track here — a major timestamp error.
audioBufferInfo.presentationTimeUs += sampleTime;
audioBufferInfo.offset = 0;
//noinspection WrongConstant
audioBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME;// videoExtractor.getSampleFlags()
mediaMuxer.writeSampleData(writeAudioTrackIndex, byteBuffer, audioBufferInfo);
audioExtractor.advance();
}
mediaMuxer.stop();
mediaMuxer.release();
videoExtractor.release();
audioExtractor.release();
} catch (IOException e) {
e.printStackTrace();
}
}
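To see why stamping the audio track with the video frame interval goes wrong, here is a small pure-Java illustration. The frame intervals are illustrative assumptions (~29.97 fps video; AAC at 44.1 kHz, which produces one frame per 1024 samples), not values taken from the code above:

```java
public class MuxTimestampDemo {
    // One AAC frame covers 1024 PCM samples at 44.1 kHz: ~23219 µs.
    static final long AAC_FRAME_US = 1024L * 1_000_000 / 44100;
    // Method 1 measures the *video* frame interval (~29.97 fps) and uses it for both tracks.
    static final long VIDEO_FRAME_US = 33367;

    // How far ahead the written audio timestamps run after `frames` audio frames.
    static long audioDriftAfter(int frames) {
        return frames * (VIDEO_FRAME_US - AAC_FRAME_US);
    }

    public static void main(String[] args) {
        // About one second of audio (43 frames) is already stamped roughly 0.44 s too late.
        System.out.println(audioDriftAfter(43));
    }
}
```

Method 2 avoids this by copying each extractor's own getSampleTime() into presentationTimeUs instead of accumulating a constant.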
Method 2 (succeeds)
public static void muxVideoAudio(String videoFilePath, String audioFilePath, String outputFile) {
try {
MediaExtractor videoExtractor = new MediaExtractor();
videoExtractor.setDataSource(videoFilePath);
MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource(audioFilePath);
MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
videoExtractor.selectTrack(0);
MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
int videoTrack = muxer.addTrack(videoFormat);
audioExtractor.selectTrack(0);
MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
int audioTrack = muxer.addTrack(audioFormat);
LogUtil.d(TAG, "Video Format " + videoFormat.toString());
LogUtil.d(TAG, "Audio Format " + audioFormat.toString());
boolean sawEOS = false;
int frameCount = 0;
int offset = 100;
int sampleSize = 256 * 1024;
ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
muxer.start();
while (!sawEOS) {
videoBufferInfo.offset = offset;
videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
if (videoBufferInfo.size < 0) {
sawEOS = true;
videoBufferInfo.size = 0;
} else {
videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
//noinspection WrongConstant
videoBufferInfo.flags = videoExtractor.getSampleFlags();
muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
videoExtractor.advance();
frameCount++;
}
}
boolean sawEOS2 = false;
int frameCount2 = 0;
while (!sawEOS2) {
frameCount2++;
audioBufferInfo.offset = offset;
audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);
if (audioBufferInfo.size < 0) {
sawEOS2 = true;
audioBufferInfo.size = 0;
} else {
audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
//noinspection WrongConstant
audioBufferInfo.flags = audioExtractor.getSampleFlags();
muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
audioExtractor.advance();
}
}
muxer.stop();
muxer.release();
LogUtil.d(TAG,"Output: "+outputFile);
} catch (IOException e) {
LogUtil.d(TAG, "Mixer Error 1 " + e.getMessage());
} catch (Exception e) {
LogUtil.d(TAG, "Mixer Error 2 " + e.getMessage());
}
}
Method 3

Uses mp4parser.

mp4parser is an open-source toolbox for processing MP4 files. Its methods depend on other parts of the toolbox, so you need to add the library to your project before calling into it:

compile "com.googlecode.mp4parser:isoparser:1.1.21"

Problem: after YouTube's re-compression the video loses most of its data — typically only about one second survives, which effectively turns the video into a still image.
public boolean mux(String videoFile, String audioFile, final String outputFile) {
if (isStopMux) {
return false;
}
Movie video;
try {
video = MovieCreator.build(videoFile);
} catch (RuntimeException e) {
e.printStackTrace();
return false;
} catch (IOException e) {
e.printStackTrace();
return false;
}
Movie audio;
try {
audio = MovieCreator.build(audioFile);
} catch (IOException e) {
e.printStackTrace();
return false;
} catch (NullPointerException e) {
e.printStackTrace();
return false;
}
Track audioTrack = audio.getTracks().get(0);
video.addTrack(audioTrack);
Container out = new DefaultMp4Builder().build(video);
FileOutputStream fos;
try {
fos = new FileOutputStream(outputFile);
} catch (FileNotFoundException e) {
e.printStackTrace();
return false;
}
BufferedWritableFileByteChannel byteBufferByteChannel = new
BufferedWritableFileByteChannel(fos);
try {
out.writeContainer(byteBufferByteChannel);
byteBufferByteChannel.close();
fos.close();
if (isStopMux) {
return false;
}
runOnUiThread(new Runnable() {
@Override
public void run() {
mCustomeProgressDialog.setProgress(100);
goShareActivity(outputFile);
// FileUtils.insertMediaDB(AddAudiosActivity.this,outputFile);//
}
});
} catch (IOException e) {
e.printStackTrace();
if (mCustomeProgressDialog.isShowing()) {
mCustomeProgressDialog.dismiss();
}
ToastUtil.showShort(getString(R.string.process_failed));
return false;
}
return true;
}
private static class BufferedWritableFileByteChannel implements WritableByteChannel {
private static final int BUFFER_CAPACITY = 2000000;
private boolean isOpen = true;
private final OutputStream outputStream;
private final ByteBuffer byteBuffer;
private final byte[] rawBuffer = new byte[BUFFER_CAPACITY];
private BufferedWritableFileByteChannel(OutputStream outputStream) {
this.outputStream = outputStream;
this.byteBuffer = ByteBuffer.wrap(rawBuffer);
}
@Override
public int write(ByteBuffer inputBuffer) throws IOException {
int inputBytes = inputBuffer.remaining();
if (inputBytes > byteBuffer.remaining()) {
dumpToFile();
byteBuffer.clear();
if (inputBytes > byteBuffer.remaining()) {
throw new BufferOverflowException();
}
}
byteBuffer.put(inputBuffer);
return inputBytes;
}
@Override
public boolean isOpen() {
return isOpen;
}
@Override
public void close() throws IOException {
dumpToFile();
isOpen = false;
}
private void dumpToFile() {
try {
outputStream.write(rawBuffer, 0, byteBuffer.position());
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}
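One subtlety in BufferedWritableFileByteChannel above: nothing reaches the OutputStream until the buffer fills or close() is called, so forgetting close() silently loses the tail of the file. A standalone miniature version with the same logic (buffer capacity reduced purely for the demo) makes that visible:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

class TinyBufferedChannel implements WritableByteChannel {
    private final OutputStream out;
    private final ByteBuffer buf = ByteBuffer.allocate(16); // tiny capacity for the demo
    private boolean open = true;

    TinyBufferedChannel(OutputStream out) { this.out = out; }

    @Override
    public int write(ByteBuffer src) throws IOException {
        int n = src.remaining();
        if (n > buf.remaining()) {
            flush(); // spill the buffer to the stream before accepting more
            if (n > buf.remaining()) throw new BufferOverflowException();
        }
        buf.put(src);
        return n;
    }

    private void flush() throws IOException {
        out.write(buf.array(), 0, buf.position());
        buf.clear();
    }

    @Override public boolean isOpen() { return open; }

    @Override
    public void close() throws IOException {
        flush(); // the only guaranteed flush point
        open = false;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        TinyBufferedChannel ch = new TinyBufferedChannel(bos);
        ch.write(ByteBuffer.wrap("hello".getBytes()));
        System.out.println(bos.size());     // 0 — still sitting in the buffer
        ch.close();
        System.out.println(bos.toString()); // hello — flushed on close()
    }
}
```

This is why the mux() method above closes byteBufferByteChannel before closing the FileOutputStream.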
Here is the divider. What follows is a screen-recording wheel that actually works. It consists mainly of a RecordService, plus a little glue code in the Activity that starts the recording.
public class RecordService extends Service {
private MediaProjection mediaProjection;
private MediaRecorder mediaRecorder;
private VirtualDisplay virtualDisplay;
private boolean running;
private int width = 720;
private int height = 1280;
private int dpi;
private String path;
@Override
public IBinder onBind(Intent intent) {
return new RecordBinder();
}
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
return START_STICKY;
}
@Override
public void onCreate() {
super.onCreate();
HandlerThread serviceThread = new HandlerThread("service_thread",
android.os.Process.THREAD_PRIORITY_BACKGROUND);
serviceThread.start();
running = false;
mediaRecorder = new MediaRecorder();
}
@Override
public void onDestroy() {
super.onDestroy();
}
public void setMediaProject(MediaProjection project) {
mediaProjection = project;
}
public boolean isRunning() {
return running;
}
public void setConfig(int width, int height, int dpi) {
this.width = width;
this.height = height;
this.dpi = dpi;
}
public boolean startRecord() {
if (mediaProjection == null || running) {
return false;
}
initRecorder();
createVirtualDisplay();
mediaRecorder.start();
running = true;
return true;
}
public boolean stopRecord() {
if (!running) {
return false;
}
File file = new File(path);
running = false;
mediaRecorder.stop();
mediaRecorder.reset();
virtualDisplay.release();
mediaProjection.stop();
insertIntoMediaStore(getApplication(), true, file, 0);
Toast.makeText(getApplicationContext(), "Saved to gallery", Toast.LENGTH_SHORT).show();
return true;
}
private void createVirtualDisplay() {
virtualDisplay = mediaProjection.createVirtualDisplay("MainScreen", width, height, dpi,
DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, mediaRecorder.getSurface(), null, null);
}
private void initRecorder() {
// mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
path = getsaveDirectory() + System.currentTimeMillis() + ".mp4";
mediaRecorder.setOutputFile(path);
mediaRecorder.setVideoSize(width, height);
mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
// mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
mediaRecorder.setVideoEncodingBitRate(900*1024);
mediaRecorder.setVideoFrameRate(30);
try {
mediaRecorder.prepare();
} catch (IOException e) {
e.printStackTrace();
}
}
public String getsaveDirectory() {
if (Environment.getExternalStorageState().equals(Environment.MEDIA_MOUNTED)) {
String rootDir = Environment.getExternalStorageDirectory().getAbsolutePath() + "/" + "epk_video" + "/";
File file = new File(rootDir);
if (!file.exists()) {
if (!file.mkdirs()) {
return null;
}
}
// Toast.makeText(getApplicationContext(), rootDir, Toast.LENGTH_SHORT).show();
return rootDir;
} else {
return null;
}
}
public class RecordBinder extends Binder {
public RecordService getRecordService() {
return RecordService.this;
}
}
// For media files stored outside the system media folders
public static void insertIntoMediaStore(Context context, boolean isVideo, File saveFile, long createTime) {
ContentResolver mContentResolver = context.getContentResolver();
if (createTime == 0)
createTime = System.currentTimeMillis();
ContentValues values = new ContentValues();
values.put(MediaStore.MediaColumns.TITLE, saveFile.getName());
values.put(MediaStore.MediaColumns.DISPLAY_NAME, saveFile.getName());
// Same underlying value, but use the type-specific constant anyway
values.put(isVideo ? MediaStore.Video.VideoColumns.DATE_TAKEN
: MediaStore.Images.ImageColumns.DATE_TAKEN, createTime);
values.put(MediaStore.MediaColumns.DATE_MODIFIED, System.currentTimeMillis());
values.put(MediaStore.MediaColumns.DATE_ADDED, System.currentTimeMillis());
if (!isVideo)
values.put(MediaStore.Images.ImageColumns.ORIENTATION, 0);
values.put(MediaStore.MediaColumns.DATA, saveFile.getAbsolutePath());
values.put(MediaStore.MediaColumns.SIZE, saveFile.length());
values.put(MediaStore.MediaColumns.MIME_TYPE, isVideo ? getVideoMimeType(saveFile.getPath()) : "image/jpeg");
// Insert into the MediaStore
mContentResolver.insert(isVideo
? MediaStore.Video.Media.EXTERNAL_CONTENT_URI
: MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
}
// Get the video MIME type; only mp4 and 3gp are handled for now
private static String getVideoMimeType(String path) {
String lowerPath = path.toLowerCase();
if (lowerPath.endsWith("mp4") || lowerPath.endsWith("mpeg4")) {
return "video/mp4";
} else if (lowerPath.endsWith("3gp")) {
return "video/3gpp"; // the registered MIME type for 3GP is video/3gpp, not video/3gp
}
return "video/mp4";
}
}
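initRecorder() above fixes the video bitrate at 900 kbps with no audio track, which makes output size easy to estimate. The helper below is plain arithmetic for intuition, not part of the recorder:

```java
public class BitrateMath {
    // Approximate bytes written per minute of recording at a given video bitrate.
    static long bytesPerMinute(long bitsPerSecond) {
        return bitsPerSecond * 60 / 8;
    }

    public static void main(String[] args) {
        // 900 * 1024 bps works out to roughly 6.6 MB of MP4 per minute
        // (plus a small amount of container overhead).
        System.out.println(bytesPerMinute(900 * 1024));
    }
}
```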
In the Activity that starts the recording:
private MediaProjectionManager projectionManager;
private MediaProjection mediaProjection;
private RecordService recordService;
// In onCreate(): start and bind the service
startService(new Intent(this, RecordService.class));
projectionManager = (MediaProjectionManager) getSystemService(MEDIA_PROJECTION_SERVICE);
Intent intent = new Intent(this, RecordService.class);
bindService(intent, connection, BIND_AUTO_CREATE);
// The ServiceConnection passed to bindService() above
private ServiceConnection connection = new ServiceConnection() {
@Override
public void onServiceConnected(ComponentName name, IBinder service) {
recordService = ((RecordService.RecordBinder) service).getRecordService();
}
@Override
public void onServiceDisconnected(ComponentName name) {
}
};
// Start recording: ask the user for screen-capture consent
if (!recordService.isRunning()) {
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
Intent captureIntent = projectionManager.createScreenCaptureIntent();
startActivityForResult(captureIntent, RECORD_REQUEST_CODE);
}
}
// Consent comes back here; hand the MediaProjection to the service
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == RECORD_REQUEST_CODE && resultCode == RESULT_OK) {
mediaProjection = projectionManager.getMediaProjection(resultCode, data);
recordService.setMediaProject(mediaProjection);
recordService.startRecord();
}
}
// Stop recording
if (recordService.isRunning()) {
recordService.stopRecord();
}
@Override
protected void onDestroy() {
super.onDestroy();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
recordService.stopRecord();
}
unbindService(connection);
}
Done!

Tested and working flawlessly on my Huawei phone — no guarantees for other devices.

Separately, I also found a more battle-tested wheel. It exposes a lot of video parameters to tune (tuning them is miserable), but device compatibility looks solid — Huawei, Xiaomi, OPPO, and Smartisan should all be fine. Recording it here since I plan to use it:

origin | https://github.com/yrom/ScreenRecorder.git