Speeding Up PAG-to-Video Export on Android

This article covers a problem hit while building a video-template feature with Tencent's open-source PAG library on Android: saving the rendered video was far too slow. Profiling showed that most of the time goes into MediaCodec encoding. To fix it, the video is encoded in several segments on multiple threads and the segments are then concatenated with ffmpeg, cutting the save time for a 41-second video from about 60 seconds down to 20-25 seconds and noticeably improving the user experience.

Scenario:

A recent requirement at work was to build a video-template feature on top of Tencent's open-source PAG library. On Android, using the PAG-to-video export code straight from the official demo turned out to be painfully slow. How slow? A 41-second video took roughly 60 seconds to save (the exact time varies by phone model, but it is slow everywhere), while the iOS side finishes the same export in about 15 seconds.
That kind of user experience naturally calls for some optimization.

The export method from the demo:

    // video export
    private void pagExportToMP4() {
        try {
            prepareEncoder();
            int totalFrames = (int) (pagFile.duration() * pagFile.frameRate() / 1000000);
            for (int i = 0; i < totalFrames; i++) {
                // Feed any pending encoder output into the muxer.
                drainEncoder(false);
                generateSurfaceFrame(i);
                if (VERBOSE) Log.d(TAG, "sending frame " + i + " to encoder");
            }
            drainEncoder(true);
        } finally {
            releaseEncoder();
        }
        Log.d(TAG, "encode finished!!! \n");
    }

The demo uses MediaCodec, the encoder that ships with the Android SDK, to encode every frame rendered by PAG into the output video.

The rough flow: prepareEncoder first initializes MediaCodec and the related parameters; the for loop then renders each frame of the PAGFile and feeds it into MediaCodec; finally the stream is finished and the resources are released.

Since I have no prior experience with Android audio/video development, tuning this core code directly was not realistic, so speeding things up required a different approach.

Multithreading

Yes, the laziest option is to spin up several threads and let them encode in parallel, which cuts the total time needed to render and encode the PAGFile dramatically. The code looks like this:

suspend fun saveToLocal(
    c: CoroutineScope, pagFile: ArrayList<PAGFile>, onResult: (code: Int, msg: String) -> Unit
) {
    val totalFrames: Int = (pagFile[0].duration() * pagFile[0].frameRate() / 1000000).toInt()
    withContext(Dispatchers.IO) {
        // Encode each quarter of the timeline in parallel, one PAGFile copy per coroutine.
        val part1 =
            async { encoderVideo(c, "part1", pagFile[0], 0, totalFrames / 4, onResult) }
        val part2 =
            async {
                encoderVideo(
                    c,
                    "part2",
                    pagFile[1],
                    totalFrames / 4,
                    totalFrames * 2 / 4,
                    onResult
                )
            }
        val part3 =
            async {
                encoderVideo(
                    c,
                    "part3",
                    pagFile[2],
                    totalFrames * 2 / 4,
                    totalFrames * 3 / 4,
                    onResult
                )
            }
        val part4 =
            async {
                encoderVideo(
                    c,
                    "part4",
                    pagFile[3],
                    totalFrames * 3 / 4,
                    totalFrames,
                    onResult
                )
            }


        // Write the ffmpeg concat list: one "file '<path>'" line per finished segment.
        val file = File("$savaParentPath/mp4list.txt")
        if (file.exists()) {
            file.delete()
        }
        try {
            file.createNewFile()
            BufferedWriter(FileWriter(file)).use { bw ->
                bw.write("file '" + part1.await() + "'\n")
                bw.write("file '" + part2.await() + "'\n")
                bw.write("file '" + part3.await() + "'\n")
                bw.write("file '" + part4.await() + "'\n")
            }
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }
    if (!c.isActive) {
        return
    }
    // Concatenate the four segments into one video without re-encoding (stream copy).
    val mergeVideo =
        "ffmpeg -y -f concat -safe 0 -i $savaParentPath/mp4list.txt -c copy $VIDEO_PATH"

    Log.v("FFmpegExtractor", mergeVideo)
    RxFFmpegInvoke.getInstance().setDebug(true)
    RxFFmpegInvoke.getInstance().runCommandRxJava(mergeVideo.split(" ").toTypedArray())
        .subscribe(object : RxFFmpegSubscriber() {
            override fun onError(p0: String?) {
                onResult.invoke(FAILED, p0 ?: "error")
            }

            override fun onFinish() {
                val videoPath = VIDEO_PATH
                val voicePath =
                    SharedPreferenceProvider.getInstance()
                        .getAppSPString(VOICE_TIPS + pagFile[0].path(), "")
                val targetPath =
                    LTDMusicHelper.getSaveVideoDir(MyApplication.getInstance()).absolutePath + "/" + System.currentTimeMillis()
                        .toString() + ".mp4"

                // Mux the matching audio track into the merged video, again without re-encoding.
                val command =
                    "ffmpeg -y -i $videoPath -i $voicePath -map 0:v -vcodec copy -map 1:a -acodec copy $targetPath"

                RxFFmpegInvoke.getInstance().runCommandRxJava(command.split(" ").toTypedArray())
                    .subscribe(object : RxFFmpegSubscriber() {
                        override fun onError(p0: String?) {
                            onResult(FAILED, p0 ?: "error")
                        }

                        override fun onFinish() {
                            onResult(SUCCESS, targetPath)
                        }

                        override fun onProgress(p0: Int, p1: Long) {

                        }

                        override fun onCancel() {

                        }
                    })
            }

            override fun onProgress(p0: Int, p1: Long) {
            }

            override fun onCancel() {

            }
        })
}

// Shared lock object; PAGPlayer.flush() is not safe to call from multiple threads at once.
private val lock = Any()

// Encodes frames [startFrame, endFrame) of pagFile into "<name>.mp4" and returns the segment path.
private suspend fun encoderVideo(
    c: CoroutineScope,
    name: String,
    pagFile: PAGFile,
    startFrame: Int,
    endFrame: Int,
    onResult: (code: Int, msg: String) -> Unit
): String {
    val utils = SavePagUtils()
    withContext(Dispatchers.IO) {
        try {
            utils.prepareEncoder(pagFile, name)
            for (i in startFrame until endFrame) {
                if (!c.isActive) {
                    utils.drainEncoder(true)
                    return@withContext
                }
                // Feed any pending encoder output into the muxer.
                utils.drainEncoder(false)
                utils.generateSurfaceFrame(pagFile, i)
            }
            utils.drainEncoder(true)
        } catch (e: Exception) {
            e.printStackTrace()
        } finally {
            try {
                utils.releaseEncoder()
            } catch (e: Exception) {
                e.printStackTrace()
            }
        }
    }
    return savaParentPath.toString() + "/$name.mp4"
}


private class SavePagUtils {
    private var mEncoder: MediaCodec? = null
    private var mMuxer: MediaMuxer? = null
    private var mTrackIndex = 0
    private var mMuxerStarted = false
    private var mBufferInfo: MediaCodec.BufferInfo? = null
    private val mBitRate = 8000000
    private var pagPlayer: PAGPlayer? = null
    private val pagComposition: PAGComposition? = null

    private val MIME_TYPE = "video/avc" // H.264 Advanced Video Coding
    private val FRAME_RATE = 30
    private val IFRAME_INTERVAL = 10 // 10 seconds between I-frames

    fun prepareEncoder(pagFile: PAGFile, name: String = "template_video") {
        mBufferInfo = MediaCodec.BufferInfo()
        var width = pagFile.width()
        var height = pagFile.height()
        if (width % 2 == 1) {
            width--
        }
        if (height % 2 == 1) {
            height--
        }
        val format = MediaFormat.createVideoFormat(MIME_TYPE, width, height)
        format.setInteger(
            MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface
        )
        format.setInteger(MediaFormat.KEY_BIT_RATE, mBitRate)
        format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE)
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL)
        try {
            mEncoder = MediaCodec.createEncoderByType(MIME_TYPE)
        } catch (e: IOException) {
            e.printStackTrace()
        }
        mEncoder!!.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        if (pagPlayer == null) {
            val pagSurface = PAGSurface.FromSurface(mEncoder!!.createInputSurface())
            pagPlayer = PAGPlayer()
            pagPlayer!!.surface = pagSurface
            pagPlayer!!.composition = pagFile
            pagPlayer!!.progress = 0.0
        }
        mEncoder!!.start()
        val outputPath = File(
            "$savaParentPath/$name.mp4"
        ).toString()
        if (File(outputPath).exists()) {
            File(outputPath).delete()
        }
        mMuxer = try {
            MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        } catch (ioe: IOException) {
            throw RuntimeException("MediaMuxer creation failed", ioe)
        }
        mTrackIndex = -1
        mMuxerStarted = false
    }

    /**
     * Releases encoder resources.  May be called after partial / failed initialization.
     */
    fun releaseEncoder() {
        if (mEncoder != null) {
            mEncoder!!.stop()
            mEncoder!!.release()
            mEncoder = null
        }
        if (mMuxer != null) {
            mMuxer!!.stop()
            mMuxer!!.release()
            mMuxer = null
        }
        pagPlayer = null
    }

    fun drainEncoder(endOfStream: Boolean) {
        if (endOfStream) {
            mEncoder!!.signalEndOfInputStream()
        }
        var encoderOutputBuffers = mEncoder!!.outputBuffers
        while (true) {
            val encoderStatus = mEncoder!!.dequeueOutputBuffer(
                mBufferInfo!!, (10000 * 60 / FRAME_RATE).toLong()
            )
            if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
                // no output available yet
                if (!endOfStream) {
                    break // out of while
                }
                // when draining at end of stream, keep looping until EOS is reached
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
                // not expected for an encoder
                encoderOutputBuffers = mEncoder!!.outputBuffers
            } else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                // should happen before receiving buffers, and should only happen once
                if (mMuxerStarted) {
                    throw RuntimeException("format changed twice")
                }
                val newFormat = mEncoder!!.outputFormat

                // now that we have the Magic Goodies, start the muxer
                mTrackIndex = mMuxer!!.addTrack(newFormat)
                mMuxer!!.start()
                mMuxerStarted = true
            } else if (encoderStatus < 0) {

                // let's ignore it
            } else {
                val encodedData = encoderOutputBuffers[encoderStatus] ?: throw RuntimeException(
                    "encoderOutputBuffer " + encoderStatus + " was null"
                )
                if (mBufferInfo!!.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG != 0) {
                    // The codec config data was pulled out and fed to the muxer when we got
                    // the INFO_OUTPUT_FORMAT_CHANGED status.  Ignore it.

                    mBufferInfo!!.size = 0
                }
                if (mBufferInfo!!.size != 0) {
                    if (!mMuxerStarted) {
                        throw RuntimeException("muxer hasn't started")
                    }
                    // adjust the ByteBuffer values to match BufferInfo (not needed?)
                    encodedData.position(mBufferInfo!!.offset)
                    encodedData.limit(mBufferInfo!!.offset + mBufferInfo!!.size)
                    mMuxer!!.writeSampleData(mTrackIndex, encodedData, mBufferInfo!!)
                }
                mEncoder!!.releaseOutputBuffer(encoderStatus, false)
                if (mBufferInfo!!.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM != 0) {
                    break // out of while
                }
            }
        }
    }

    fun generateSurfaceFrame(pagFile: PAGFile, frameIndex: Int) {
        val totalFrames = (pagFile.duration() * pagFile.frameRate() / 1000000).toInt()
        // progress is a fraction in [0, 1); skip anything out of range
        val progress = frameIndex % totalFrames * 1.0f / totalFrames
        if (progress >= 1.0f) {
            return
        }
        // PAGPlayer.flush() is not thread-safe across the four encoder coroutines,
        // so the render call is serialized with a shared lock.
        synchronized(lock) {
            pagPlayer!!.progress = progress.toDouble()
            pagPlayer!!.flush()
        }
    }
}

Point the path-related variables (savaParentPath, VIDEO_PATH, and so on) at whatever directory you want to save to.
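
Those constants are not defined in the snippets above, so here is a minimal sketch of what they might look like; the directory choice is an assumption and should be adapted to your app:

// Hypothetical path setup; adjust to wherever your app is allowed to write.
// The names savaParentPath and VIDEO_PATH match the ones used in the code above.
val savaParentPath: String =
    MyApplication.getInstance().getExternalFilesDir(null)!!.absolutePath + "/pag_export"
val VIDEO_PATH: String = "$savaParentPath/merged.mp4"

// Make sure the directory exists before encoding starts.
// File(savaParentPath).mkdirs()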

Key points:
1. Four coroutines encode the PAGFile in parallel, so four PAGFile instances are needed, each paired with its own MediaCodec, and each one renders from a different point in the timeline (see the usage sketch after this list).
2. The RxFFmpeg third-party library concatenates the four segments into a single video and then muxes the audio file into it.
3. The lock in generateSurfaceFrame is there because pagPlayer.flush() has a concurrency problem: without it, multiple threads fetching frame data at the same time can occasionally produce misplaced PAG layers and images. With the lock in place, only drainEncoder actually runs in parallel, so that is effectively the only part being sped up.
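
As a rough usage sketch (the template path, coroutine scope, and log tag below are placeholders, not from the original code), loading the four copies and kicking off the export might look like this:

// Hypothetical call site: load four independent PAGFile instances of the same template
// so that each encoder coroutine gets its own copy, then start the export.
val templatePath = "/sdcard/templates/demo.pag"   // placeholder path
val copies = arrayListOf(
    PAGFile.Load(templatePath),
    PAGFile.Load(templatePath),
    PAGFile.Load(templatePath),
    PAGFile.Load(templatePath)
)
scope.launch {
    saveToLocal(scope, copies) { code, msg ->
        Log.d("PagExport", "export finished: code=$code, msg=$msg")
    }
}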

Summary:

The PAGFile encoding time is actually spent almost entirely in these two calls:

utils.drainEncoder(false)
utils.generateSurfaceFrame(pagFile, i)

because they run once for every frame of the video, over a very large number of iterations. With some rough logging I found the two calls together average about 50 ms per frame, so a 30 fps × 40 s video needs roughly 1200 × 50 ms = 60 s just to save, which is far too long.
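
Written out, the back-of-the-envelope estimate looks like this (the 50 ms per-frame cost is the measured average from my logs; the rest is plain arithmetic):

// Rough estimate: frames = fps * seconds, time = frames * per-frame cost.
val fps = 30
val durationSeconds = 40
val perFrameMs = 50L                          // measured average for drainEncoder + generateSurfaceFrame
val totalFrames = fps * durationSeconds       // 1200 frames
val singleThreadMs = totalFrames * perFrameMs         // ~60,000 ms single-threaded
val perSegmentMs = (totalFrames / 4) * perFrameMs     // ~15,000 ms per segment when split four ways

Splitting the work four ways brings the ideal per-segment time down to about 15 s; presumably the ffmpeg concat, the audio mux, and per-encoder setup account for the rest of the 20-25 s total.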

With the multithreaded approach, the Android side now converts the PAGFile to video in roughly 20-25 s. Feel free to try more threads; in my testing four was the most stable configuration. It still does not match iOS, but it is a big improvement and well within what the product team finds acceptable.
