No sound after cropping a video? Fixing the silent-clip problem and cutting quickly: just supply the cut times and the source file, and the rest is simple

The project has just gone live, but a bigger headache has arrived: video processing. Let's tackle video cropping first. I had read a few posts about it before and most of them were useless, but they did point me toward an approach: the mp4parser library (the project that used to live on Google Code). Honestly I would rather do all the video work with FFmpeg, but I don't have the low-level background for it. After digging through the source, mp4parser turned out to be fairly straightforward, with one drawback: it only handles the MP4 format, which is maddening. Enough talk, here is the code:

import org.mp4parser.Container;
import org.mp4parser.muxer.Movie;
import org.mp4parser.muxer.Track;
import org.mp4parser.muxer.builder.DefaultMp4Builder;
import org.mp4parser.muxer.container.mp4.MovieCreator;
import org.mp4parser.muxer.tracks.ClippedTrack;

import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.LinkedList;
import java.util.List;

public class ShortenVideo {
    /**
     * Cuts one or more clips out of an MP4 file.
     *
     * @param srcVideoPath path of the source MP4
     * @param dstVideoPath directory the clips are written to
     * @param name         file name appended to each clip, e.g. "out.mp4"
     * @param times        start/end pairs in seconds: {start1, end1, start2, end2, ...}
     */
    public static void videoCut(String srcVideoPath, String dstVideoPath, String name, double[] times) throws IOException {
        int dstVideoNumber = times.length / 2;

        String[] dstVideoPathes = new String[dstVideoNumber];

        // A timestamp plus the clip index keeps every output name unique, so a
        // second cut never overwrites the clips produced by an earlier one.
        String timestamp = new SimpleDateFormat("yyyyMMdd-HHmmss").format(new Date());
        for (int i = 0; i < dstVideoNumber; i++) {
            dstVideoPathes[i] = dstVideoPath + "/cutMovie-" + timestamp + "-" + i + "-" + name;
        }
        int timesCount = 0;

        for (int idst = 0; idst < dstVideoPathes.length; idst++) {

            Movie movie = MovieCreator.build(srcVideoPath);

            List<Track> tracks = movie.getTracks();
            movie.setTracks(new LinkedList<Track>());
            // remove all tracks; we will create new tracks from the old ones below
            double startTime1 = times[timesCount];
            double endTime1 = times[timesCount + 1];
            timesCount = timesCount + 2;
            boolean timeCorrected = false;
            // Here we try to find a track that has sync samples. Since we can only start decoding
            // at such a sample we SHOULD make sure that the start of the new fragment is exactly
            // such a frame
            for (Track track : tracks) {
                if (track.getSyncSamples() != null && track.getSyncSamples().length > 0) {
                    if (timeCorrected) {
                        // This exception here could be a false positive in case we have multiple tracks
                        // with sync samples at exactly the same positions. E.g. a single movie containing
                        // multiple qualities of the same video (Microsoft Smooth Streaming file)

                        throw new RuntimeException("The startTime has already been corrected by another track with SyncSample. Not Supported.");
                    }
                    startTime1 = correctTimeToSyncSample(track, startTime1, false);
                    endTime1 = correctTimeToSyncSample(track, endTime1, true);

                    timeCorrected = true;
                }
            }

            // Clip every track in the movie, video and audio alike, so the cut
            // clip keeps its sound in sync with the picture.
            for (Track track : tracks) {
                long currentSample = 0;
                double currentTime = 0;
                double lastTime = -1;
                long startSample1 = -1;
                long endSample1 = -1;


                for (int i = 0; i < track.getSampleDurations().length; i++) {
                    long delta = track.getSampleDurations()[i];


                    if (currentTime > lastTime && currentTime <= startTime1) {
                        // current sample is still before the new starttime
                        startSample1 = currentSample;
                    }
                    if (currentTime > lastTime && currentTime <= endTime1) {
                        // current sample is after the new start time and still before the new endtime
                        endSample1 = currentSample;
                    }

                    lastTime = currentTime;
                    currentTime += (double) delta / (double) track.getTrackMetaData().getTimescale();
                    currentSample++;
                }
                // Keep only the samples between the corrected start and end; ClippedTrack
                // works on any track type, so audio tracks get cut right along with video.
                movie.addTrack(new ClippedTrack(track, startSample1, endSample1));
            }
            Container out = new DefaultMp4Builder().build(movie);

            // try-with-resources closes the channel and the stream even if writing fails
            try (FileOutputStream fos = new FileOutputStream(dstVideoPathes[idst]);
                 FileChannel fc = fos.getChannel()) {
                out.writeContainer(fc);
            }

        }
    }

    // Snaps a cut time to a sync sample (key frame): forward when next is true,
    // otherwise back to the previous one, since decoding can only start at a key frame.
    private static double correctTimeToSyncSample(Track track, double cutHere, boolean next) {
        double[] timeOfSyncSamples = new double[track.getSyncSamples().length];
        long currentSample = 0;
        double currentTime = 0;
        for (int i = 0; i < track.getSampleDurations().length; i++) {
            long delta = track.getSampleDurations()[i];

            int syncIndex = Arrays.binarySearch(track.getSyncSamples(), currentSample + 1);
            if (syncIndex >= 0) {
                // sample numbers are 1-based while our counter starts at zero, hence the +1
                timeOfSyncSamples[syncIndex] = currentTime;
            }
            currentTime += (double) delta / (double) track.getTrackMetaData().getTimescale();
            currentSample++;

        }
        double previous = 0;
        for (double timeOfSyncSample : timeOfSyncSamples) {
            if (timeOfSyncSample > cutHere) {
                if (next) {
                    return timeOfSyncSample;
                } else {
                    return previous;
                }
            }
            previous = timeOfSyncSample;
        }
        return timeOfSyncSamples[timeOfSyncSamples.length - 1];
    }
}
This is my modified version of the code. The code I found elsewhere worked like this: you clicked "cut video" and it generated three clips with fixed names, and a second cut would overwrite the clips from the first, so I changed it. Now you simply pass in the start and end time of each cut and you are done, with no more worries about files being overwritten. The overwriting happened because the output file names were identical, so I added a timestamp (plus the clip index) to make each name unique. Which finally brings us to the title: the clip had no sound after cutting. The reason is simple, the wrong package was imported; the different mp4parser packages differ quite a bit, so make sure the imports match the dependency you pull in. The class above is a utility class you can use directly. If you have any questions, leave a comment below; I have been keeping an eye on the blog lately.
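For completeness, here is a minimal usage sketch of the class above (my addition, not part of the original post). The paths, times, and output name are made up for illustration, and the dependency coordinates in the comment are my best recollection of the newer mp4parser artifacts, so verify them on Maven Central before relying on them.

import java.io.IOException;

// Minimal usage sketch; the paths, times, and output name below are examples only.
// ShortenVideo's imports come from the newer org.mp4parser packages, so the
// dependency you add should be the matching artifacts (something like
// org.mp4parser:isoparser and org.mp4parser:muxer; check Maven Central for the
// exact names and current version), not the old com.googlecode.mp4parser one.
public class ShortenVideoDemo {
    public static void main(String[] args) throws IOException {
        // Cut two clips out of the same source: 5 s to 20 s, and 42 s to 60 s.
        // The destination directory must already exist.
        double[] times = {5.0, 20.0, 42.0, 60.0};
        ShortenVideo.videoCut("/tmp/input.mp4", "/tmp/clips", "out.mp4", times);
    }
}

Each pair in the times array produces one clip, and the timestamp-plus-index in the file name means repeated runs will not clobber earlier output.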
