Exporting a Clipped Portion of a Video in a WeChat Mini Program

If you just want to trim a video, WeChat Mini Programs offer a very convenient API, wx.openVideoEditor.
If you only need the video's first frame, see my article 从0开始的canvas学习(二).
But to replace the audio track, or to overlay some styling on the video, you have to get at the video track, the audio track, and every individual frame.
So, first: extracting the frames.

1· Extracting the video's frames

WeChat Mini Programs provide the createVideoDecoder API for decoding a video's frames. It always feels like it drops frames when I use it, but so far I haven't found a better way.

Steps
To extract a video's frames, you first need to know how many frames the video has.
Get the video metadata, including fps and duration, via wx.getVideoInfo.
From those values we can compute the total frame count (fps × duration), as in the sketch below.
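
A minimal sketch of that computation (assuming tempFilePath is a local path from wx.chooseVideo or wx.downloadFile):

var videoInfo = await wx.getVideoInfo({ src: tempFilePath })
// duration is in seconds, so total frames ≈ fps × duration
var totalFrames = Math.floor(videoInfo.fps * videoInfo.duration)
console.log(videoInfo.fps + ' fps × ' + videoInfo.duration + 's ≈ ' + totalFrames + ' frames')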

Next we'll extract the frames and generate an image for each one, using the createOffscreenCanvas API to create an offscreen canvas.

If the user picks a video, you get a local file directly; for a network URL, first call wx.downloadFile to pull the video down and obtain a local temp path.
It can be a network video or your own. The example below uses a local video on my machine; don't pick anything too long, or the Mini Program can easily crash.

The key logic lives in the getFrameData method. In short: the frame data you get back is an array holding the image's pixel values.

For a 1px × 1px image the array has length 4: [0, 0, 0, 255] represents a single opaque black pixel. An image is just pixels like this, one after another, and with the canvas API we can draw them out and export the result.

Note that if the number of pixels you feed in doesn't match the canvas's actual drawing size, you'll get an error. As above: trying to draw [0, 0, 0, 255] across two pixels clearly makes no sense.
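
As a quick sanity check, a sketch (frame is the object returned by getFrameData, which also carries its own width and height):

var frame = this.videoDecoder.getFrameData()
if (frame) {
  // The RGBA buffer must hold exactly width × height × 4 bytes,
  // otherwise drawing it at the canvas size fails as described above.
  console.assert(frame.data.byteLength === frame.width * frame.height * 4)
}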

Page({
  // These links may expire; just swap in your own.
  data: {
    audioUrl: "https://dl.stream.qqmusic.qq.com/C4000026gMkr1L9tKN.m4a?guid=7840612806&vkey=4696DBD6111F38EC3D75D0F4FDF769B3736068A795CF585BA18DF3A4BCD8F83D0662B54D70F64934AF74A34C9A354AFF0AA4660CCE4C66F0&uin=&fromtag=120032",
    videoUrl: "https://yun-live.oss-cn-shanghai.aliyuncs.com/video/20201225/2EbJdXN5ZG.mp4",
    videoInfo: {},
    canvasWidth: 0,
    canvasHeight: 0,
    fps: 0,
    duration: 0,
    imageList: []
  },
  onLoad() {
    this.videoDecoderStart()
  },
  async videoDecoderStart() {
    // Let the user pick a video
    var { tempFilePath } = await wx.chooseVideo()
    console.log(tempFilePath)
    // For a network video, download it first:
    // wx.showLoading({ title: 'Loading...' })
    // var { tempFilePath } = await this.getTempPath(this.data.videoUrl)
    // wx.hideLoading()
    var videoInfo = await wx.getVideoInfo({
      src: tempFilePath,
    })
    this.setData({
      videoInfo: videoInfo,
      canvasWidth: videoInfo.width,
      canvasHeight: videoInfo.height,
      duration: videoInfo.duration,
      fps: videoInfo.fps
    }, () => {
      // Create the video decoder
      this.videoDecoder = wx.createVideoDecoder()
      const {
        canvas,
        context
      } = this.initOffscreenCanvas(this.data.canvasWidth, this.data.canvasHeight)
      this.videoDecoder.on("start", () => {
        this.videoDecoder.seek(0)
        // Poll until the decoder has produced its first frame
        this.timer = setInterval(() => {
          this.getFrameData(canvas, context)
        }, 300)
      })
      this.videoDecoder.on("seek", () => {})
      this.videoDecoder.on("stop", () => {})
      this.videoDecoder.start({
        source: tempFilePath
      })
    })
  },
  getFrameData(canvas, context) {
    var func = () => {
      // Frame data: { width, height, data }, where data is an RGBA ArrayBuffer
      var imageVideoData = this.videoDecoder.getFrameData()
      if (imageVideoData) {
        if (this.timer) {
          clearInterval(this.timer)
          this.timer = null
        }
        context.clearRect(0, 0, this.data.canvasWidth, this.data.canvasHeight)
        var imgData = context.createImageData(this.data.canvasWidth, this.data.canvasHeight)
        var clampedArray = new Uint8ClampedArray(imageVideoData.data)
        for (var i = 0; i < clampedArray.length; i++) {
          imgData.data[i] = clampedArray[i];
        }
        context.putImageData(imgData, 0, 0)
        this.data.imageList.push(canvas.toDataURL())
        this.setData({
          imageList: this.data.imageList
        }, () => {
          // Schedule reading the next frame
          canvas.requestAnimationFrame(func)
        })
      }
    }
    func()
  },
  getTempPath(url) {
    return new Promise((resolve, reject) => {
      wx.downloadFile({
        url: url,
        success: resolve,
        fail: reject
      })
    })
  },
  // Create an offscreen canvas: we only draw frames on it to export them, no preview needed
  initOffscreenCanvas(canvasWidth, canvasHeight) {
    const canvas = wx.createOffscreenCanvas({
      type: '2d',
      width: canvasWidth,
      height: canvasHeight
    })
    return {
      canvas,
      context: canvas.getContext('2d')
    }
  },
})
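
One caveat with the sample above: the requestAnimationFrame chain only re-schedules itself when getFrameData returns a frame, so it silently stops on the first null frame. A hedged sketch of a more explicit stop condition, using the total frame count computed earlier (this.frameCount is an assumed helper field, not part of the original code):

// Inside the rAF callback, after pushing the current frame:
this.frameCount = (this.frameCount || 0) + 1
if (this.frameCount >= Math.floor(this.data.fps * this.data.duration)) {
  this.videoDecoder.stop()   // fires the "stop" handler registered above
  this.videoDecoder.remove() // release the decoder
} else {
  canvas.requestAnimationFrame(func)
}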
 
2· Displaying the frames

Once the frames are extracted, the natural next step is to display them. Below are the styles and code I used.

(Screenshot: the extracted frames laid out in a grid)
The code:

index.wxml
<view>length:{{imageList.length}} fps:{{fps}} duration:{{duration}}</view>
<view  class="content grid" >
  <view  style="background: #f7f7f7;width: 100%;height: 100%;" wx:for="{{imageList}}">
    <image style="width: 100%;height: 100%;"  mode="aspectFit"  bindtap="per" data-index="{{index}}"  src="{{item}}"   />
  </view>
</view>
index.wxss
.grid {
  display: grid;
  grid-template-columns: repeat(4, calc(25vw - 8rpx * 3 / 4));
  gap: 8rpx;
  /* grid-template-rows:repeat( 200rpx); */
  grid-auto-rows: calc(25vw - 8rpx * 3 / 4);
}
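A note on the calc() expression: each of the four columns takes a quarter of the viewport width, minus its share of the three 8rpx gaps (3 gaps ÷ 4 columns), so the grid fits the screen exactly; grid-auto-rows keeps every cell square.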
3· Drawing the frames back into a video

Now that the video's frames have been read out, these images need to be turned back into a video. For this we use the official wx.createMediaRecorder API, a canvas recorder: according to the official docs it records the contents of a WebGL canvas frame by frame and exports the result. For the WebGL side we'll use the threejs-miniprogram port of three.js.
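
Before wiring in three.js, here is the recorder's core loop in isolation, as a minimal sketch (assumes it runs inside an async method, and that webglCanvas is the <canvas type="webgl"> node fetched via a selector query):

// Record frames from a WebGL canvas and export a temp video file
const recorder = wx.createMediaRecorder(webglCanvas, { fps: 30, duration: 10000 })
await new Promise(resolve => {
  recorder.on('start', resolve)
  recorder.start()
})
for (let i = 0; i < 30; i++) {
  // ...draw the next frame onto webglCanvas here...
  await new Promise(resolve => recorder.requestFrame(resolve)) // capture it
}
const { tempFilePath } = await recorder.stop()
recorder.destroy()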
First, let's draw something on the canvas:

// wxml part
// <canvas type="webgl" id="target1" style="{{'width: '+(300)+'px; height: '+(300)+'px;'}}"></canvas>

onLoad() {
  this.startRecorder()
},
async startRecorder() {
  wx.showLoading({
    title: 'Loading...',
  })
  var {
    tempFilePath
  } = await this.getTempPath('https://profile-avatar.csdnimg.cn/110c65f1445e48bca8c80b3c2f4bbaf0_zhoulib__.jpg!2')
  wx.hideLoading()
  console.log(tempFilePath)
  this.webglCanvas = await this.getWebglCanvas('target1')
  this.render = await this.drawWebGLCanvas(this.webglCanvas, tempFilePath)
  await this.webglCanvas.requestAnimationFrame(this.render)
},
getWebglCanvas(id) {
  return new Promise((resolve) => {
    this.createSelectorQuery()
      .select('#' + id)
      .node(res => resolve(res.node))
      .exec();
  });
},
async drawWebGLCanvas(canvas, tempFilePath) {
  var that = this
  const THREE = createScopedThreejs(canvas)
  var camera, scene, renderer;
  var mesh;
  camera = new THREE.PerspectiveCamera(70, canvas.width / canvas.height, 1, 4000);
  camera.position.set(0, 0, 400);
  scene = new THREE.Scene();
  renderer = new THREE.WebGLRenderer({
    antialias: false
  });
  renderer.setPixelRatio(1);
  renderer.setSize(canvas.width, canvas.height);
  var geometry = new THREE.BoxBufferGeometry(200, 200, 200);
  // Load the downloaded image as the cube's texture
  var texture = await new Promise(resolve => new THREE.TextureLoader().load(tempFilePath, resolve));
  texture.minFilter = THREE.LinearFilter
  var material = new THREE.MeshBasicMaterial({
    map: texture
  });
  material.needsUpdate = true
  mesh = new THREE.Mesh(geometry, material);
  scene.add(mesh);

  return async function render() {
    mesh.rotation.x += 0.005;
    mesh.rotation.y += 0.1;
    renderer.render(scene, camera);
    // async/await here because later we'll swap the material's texture once per frame
    await that.webglCanvas.requestAnimationFrame(render)
  }
},

(Screenshot: the spinning cube with the image as its texture)
You can see a cube with the image rendered onto it as a texture (it spins continuously), so the direction forward is clear.

The steps from here:
1. Extract the video's frames
2. Draw each frame onto the WebGL canvas
3. Record the WebGL canvas frame by frame with the media recorder, treating each capture as one frame of the new video
Note: some methods already appeared above, so they are not repeated here.

// wxml part
// <video src="{{videoPath}}"></video>
// <button bindtap="startRecorder">Start</button>
// <canvas type="webgl" id="target1" style="{{'width: '+(300)+'px; height: '+(300)+'px;'}}"></canvas>

data: {
  videoPath: ""
},
onLoad() {
  this.videoDecoderStart()
},
initVideoRecorder(webglCanvas, fps, duration) {
  // duration is in seconds; the recorder expects milliseconds
  return wx.createMediaRecorder(webglCanvas, {
    fps: fps,
    duration: duration * 1000
  })
},
getWebglCanvas(id) {
  return new Promise((resolve) => {
    this.createSelectorQuery()
      .select('#' + id)
      .node(res => resolve(res.node))
      .exec();
  });
},
async startRecorder() {
  this.webglCanvas = await this.getWebglCanvas('target1')
  this.render = await this.drawWebGLCanvas(this.webglCanvas)
  this.videoRecorder = this.initVideoRecorder(this.webglCanvas, this.data.fps, this.data.duration)
  await new Promise(resolve => {
    this.videoRecorder.on('start', resolve)
    this.videoRecorder.start()
  })
  // Draw each extracted frame, then let the recorder capture it
  for (let i = 0; i < this.data.imageList.length; i++) {
    await this.render(this.data.imageList[i])
    await new Promise(r => this.videoRecorder.requestFrame(r))
  }
  const {
    tempFilePath: videoPath
  } = await this.videoRecorder.stop()
  console.log(videoPath)
  this.setData({
    videoPath
  })
  this.videoRecorder.destroy()
},
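Note the pattern inside the loop: each iteration first awaits render(), which loads one extracted frame as a texture and draws it, and only then awaits requestFrame(), so the recorder captures exactly one output frame per input frame and the fps/duration passed to initVideoRecorder line up with the original video.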


4· Adding the audio track

After the steps above, you'll notice the exported video has no sound. That's because we only converted frames into a video:
it has a video track but no audio track.
To combine a video track with an audio track, WeChat provides wx.createMediaContainer. This API can split a video into its video track and audio track; if the video is silent there is simply no audio track. The extracted tracks value is an array, and for a silent video it holds a single entry, like this:
(Screenshot: the extracted tracks array, containing a single video track)
Here is the method for extracting a video's video or audio track.
I applied a singleton pattern when creating the container object, because without it the exported video always came out empty.

  getAbort(source) {
    return new Promise(resolve => {
      this.abortMediaContainer = this.abortMediaContainer ? this.abortMediaContainer : wx.createMediaContainer()
      this.abortMediaContainer.extractDataSource({
        source,
        success: resolve
      })
    })
  },

Usage (extending the earlier methods).
The logic below: when a video is chosen, extract its audio track; then, after processing, merge that saved audio track with the processed video at export time.

  async videoDecoderStart() {
    var {
      tempFilePath
    } = await wx.chooseVideo()
    // Grab the audio track as soon as the video is chosen
    var { tracks } = await this.getAbort(tempFilePath)
    this.audioTrack = tracks.find(ele => ele.kind == 'audio')
    // ...same as before
  }
  async startRecorder() {
    // ...continues from the earlier version of this method
    const { tempFilePath: videoPath } = await this.videoRecorder.stop()
    this.setData({
      videoPath
    }, async () => {
      var { tracks } = await this.getAbort(this.data.videoPath)
      var videoTracks = tracks.find(ele => ele.kind == 'video')
      if (videoTracks && this.audioTrack) {
        this.abortMediaContainer.addTrack(videoTracks)
        this.abortMediaContainer.addTrack(this.audioTrack)
        // slice() can trim a track's length; the arguments below mean 0 ms to 10000 ms
        // console.log(videoTracks.slice(0, 10000))
        // console.log(this.audioTrack.slice(0, 10000))
        this.abortMediaContainer.export({
          success: async (res) => {
            var { tracks } = await this.getAbort(res.tempFilePath)
            console.log(tracks)
            this.setData({
              exportSrc: res.tempFilePath
            })
          }
        })
      }
    })
  }

The exported video now has sound.
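And since this article is about exporting a clipped portion, the commented-out slice() calls are the hook for that. A hedged sketch, assuming (as the comment above indicates) that slice(startMs, endMs) yields a trimmed track:

var { tracks } = await this.getAbort(this.data.videoPath)
var videoTrack = tracks.find(ele => ele.kind == 'video')
if (videoTrack && this.audioTrack) {
  // Keep only the 0 ms to 10000 ms range of both tracks before merging
  this.abortMediaContainer.addTrack(videoTrack.slice(0, 10000))
  this.abortMediaContainer.addTrack(this.audioTrack.slice(0, 10000))
  this.abortMediaContainer.export({
    success: res => console.log('clipped export:', res.tempFilePath)
  })
}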

Full code

index.wxml
<view>length:{{imageList.length}} fps:{{fps}} duration:{{duration}}</view>
<view  class="content grid" >
  <view  style="background: #f7f7f7;width: 100%;height: 100%;" wx:for="{{imageList}}">
    <image style="width: 100%;height: 100%;"  mode="aspectFit"  bindtap="per" data-index="{{index}}"  src="{{item}}"   />
  </view>
</view>
<video  src="{{videoPath}}"></video>
<video  src="{{exportSrc}}"></video>
<button bindtap="startRecorder">Start</button>
<canvas type="webgl" id="target1" style="{{'width: '+(300)+'px; height: '+(300)+'px;'}}"></canvas>

index.wxss
.grid {
  display: grid;
  grid-template-columns: repeat(4, calc(25vw - 8rpx * 3 / 4));
  gap: 8rpx;
  /* grid-template-rows:repeat( 200rpx); */
  grid-auto-rows: calc(25vw - 8rpx * 3 / 4);
}
// index.js
// Get the app instance
import {
  createScopedThreejs
} from '../../utils/threejs-miniprogram/index'

const app = getApp()

Page({
  data: {
    audioUrl: "https://dl.stream.qqmusic.qq.com/C4000026gMkr1L9tKN.m4a?guid=7840612806&vkey=4696DBD6111F38EC3D75D0F4FDF769B3736068A795CF585BA18DF3A4BCD8F83D0662B54D70F64934AF74A34C9A354AFF0AA4660CCE4C66F0&uin=&fromtag=120032",
    videoUrl: "https://yun-live.oss-cn-shanghai.aliyuncs.com/video/20201225/2EbJdXN5ZG.mp4",
    videoInfo: {},
    canvasWidth: 0,
    canvasHeight: 0,
    fps: 0,
    duration: 0,
    imageList: [],
    videoPath: "",
    exportSrc: ""
  },
  // Event handlers
  onLoad() {
    // this.startRecorder()
    // return
    this.videoDecoderStart()
  },
  async videoDecoderStart() {
    var {
      tempFilePath
    } = await wx.chooseVideo()
    console.log(tempFilePath)
    // Grab the audio track right after the video is chosen
    var { tracks } = await this.getAbort(tempFilePath)
    this.audioTrack = tracks.find(ele => ele.kind == 'audio')
    // For a network video, download it first:
    // wx.showLoading({ title: 'Loading...' })
    // var { tempFilePath } = await this.getTempPath(this.data.videoUrl)
    // wx.hideLoading()
    var videoInfo = await wx.getVideoInfo({
      src: tempFilePath,
    })
    // Optionally compress the video first with wx.compressVideo(...)
    this.setData({
      videoInfo: videoInfo,
      // Swap width/height for rotated (portrait-shot) videos
      canvasWidth: videoInfo.orientation == 'left' || videoInfo.orientation == 'right' ? videoInfo.height : videoInfo.width,
      canvasHeight: videoInfo.orientation == 'left' || videoInfo.orientation == 'right' ? videoInfo.width : videoInfo.height,
      duration: videoInfo.duration,
      fps: videoInfo.fps
    }, () => {
      // Create the video decoder
      this.videoDecoder = wx.createVideoDecoder()
      const {
        canvas,
        context
      } = this.initOffscreenCanvas(this.data.canvasWidth, this.data.canvasHeight)
      this.videoDecoder.on("start", () => {
        this.videoDecoder.seek(0)
        this.timer = setInterval(() => {
          this.getFrameData(canvas, context)
        }, 300)
      })
      this.videoDecoder.on("seek", () => {})
      this.videoDecoder.on("stop", () => {})
      this.videoDecoder.start({
        source: tempFilePath,
        abortAudio: true // we only need the picture here; audio is merged back via the media container
      })
    })
  },
  getFrameData(canvas, context) {
    var func = () => {
      var imageVideoData = this.videoDecoder.getFrameData()
      if (imageVideoData) {
        if (this.timer) {
          clearInterval(this.timer)
          this.timer = null
        }
        context.clearRect(0, 0, this.data.canvasWidth, this.data.canvasHeight)
        var imgData = context.createImageData(this.data.canvasWidth, this.data.canvasHeight)
        var clampedArray = new Uint8ClampedArray(imageVideoData.data)
        for (var i = 0; i < clampedArray.length; i++) {
          imgData.data[i] = clampedArray[i];
        }
        context.putImageData(imgData, 0, 0)
        this.data.imageList.push(canvas.toDataURL())
        this.setData({
          imageList: this.data.imageList
        }, () => {
          canvas.requestAnimationFrame(func)
        })
      }
    }
    func()
  },

  getTempPath(url) {
    return new Promise((r, j) => {
      wx.downloadFile({
        url: url,
        success: r,
        fail: j
      })
    })
  },
  // 创建一个离屏画布,因为只是单纯的绘制图片,就不要预览了只需要看
  initOffscreenCanvas(canvasWidth, canvasHeight) {
    const canvas = wx.createOffscreenCanvas({
      type: '2d',
      width: canvasWidth,
      height: canvasHeight
    })
    return {
      canvas,
      context: canvas.getContext('2d')
    }
  },
  initVideoRecorder(webglCanvas, fps, duration) {
    return wx.createMediaRecorder(webglCanvas, {
      fps: fps,
      duration: duration * 1000
    })
  },
  getAbort(source) {
    return new Promise(async relsove => {
      this.abortMediaContainer = this.abortMediaContainer ? this.abortMediaContainer : wx.createMediaContainer()
      this.abortMediaContainer.extractDataSource({
        source,
        success: relsove
      })
    })
  },
  async startRecorder() {
    this.webglCanvas = await this.getWebglCanvas('target1')
    this.render = await this.drawWebGLCanvas(this.webglCanvas)
    this.videoRecorder = this.initVideoRecorder(this.webglCanvas, this.data.fps, this.data.duration)
    await new Promise(resolve => {
      this.videoRecorder.on('start', resolve)
      this.videoRecorder.start()
    })
    // Draw each extracted frame and let the recorder capture it
    for (let i = 0; i < this.data.imageList.length; i++) {
      await this.render(this.data.imageList[i])
      await this.sleep(10)
      await new Promise(r => this.videoRecorder.requestFrame(r))
    }
    const {
      tempFilePath: videoPath
    } = await this.videoRecorder.stop()
    this.setData({
      videoPath
    }, async () => {
      var { tracks } = await this.getAbort(this.data.videoPath)
      var videoTracks = tracks.find(ele => ele.kind == 'video')
      if (videoTracks && this.audioTrack) {
        this.abortMediaContainer.addTrack(videoTracks)
        this.abortMediaContainer.addTrack(this.audioTrack)
        // slice() can trim a track, e.g. keep 0 ms to 10000 ms:
        // console.log(videoTracks.slice(0, 10000))
        // console.log(this.audioTrack.slice(0, 10000))
        this.abortMediaContainer.export({
          success: async (res) => {
            var { tracks } = await this.getAbort(res.tempFilePath)
            console.log(tracks)
            this.setData({
              exportSrc: res.tempFilePath
            })
          }
        })
      }
    })
    this.videoRecorder.destroy()
  },
  getWebglCanvas(id) {
    return new Promise((resolve) => {
      this.createSelectorQuery()
        .select('#' + id)
        .node(res => resolve(res.node))
        .exec();
    });
  },
  sleep(ms) {
    return new Promise(resolve => {
      setTimeout(resolve, ms)
    })
  },
  async drawWebGLCanvas(canvas) {
    var that = this
    const THREE = createScopedThreejs(canvas)
    var camera, scene, renderer;
    var mesh;
    camera = new THREE.PerspectiveCamera(70, canvas.width / canvas.height, 1, 4000);
    camera.position.set(0, 0, 400);
    scene = new THREE.Scene();
    renderer = new THREE.WebGLRenderer({
      antialias: false
    });
    renderer.setPixelRatio(1);
    renderer.setSize(canvas.width, canvas.height);
    // A SphereGeometry works here too
    var geometry = new THREE.BoxBufferGeometry(200, 200, 200);
    // Start with an empty material; the texture is swapped in once per frame
    var material = new THREE.MeshBasicMaterial({});
    material.needsUpdate = true
    mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);
    return async function render(imgData) {
      // Load the current frame image and use it as the cube's texture
      var texture = await new Promise(resolve => new THREE.TextureLoader().load(imgData, resolve));
      texture.minFilter = THREE.LinearFilter
      material.map = texture
      mesh.rotation.x += 0.005;
      mesh.rotation.y += 0.1;
      renderer.render(scene, camera);
    }
  },
})