LM-based voice command for GB28181/EHOME devices (intercom and broadcast)

Introduction

LM video middleware supports audio intercom and broadcast: one-to-one, one-to-many, and many-to-many talk, commanding front-end devices to play custom WAV voice files, and creating voice chat rooms. On top of ordinary security-camera hardware it enables real-time multi-party voice calls, voice announcements, and alarm-linked voice playback.

The devices supported by LM video middleware are:

  1. All Hikvision device series
  2. All Dahua device series
  3. GB28181 devices (UDP/TCP on internal networks, TCP mode on external networks)
  4. EHOME protocol devices, including ISUP

The main public interfaces are: start intercom (broadcast), stop intercom (broadcast), send voice, and send a WAV file for playback. See the API reference document for details.

Tip

The device must have a microphone and speaker installed.

API reference

Start intercom (broadcast)

This interface starts or joins an intercom group. Request parameters:

  • deviceId may be empty, in which case the caller creates (or joins) an intercom group without adding any new devices to it
  • ssid set to 0 creates a new intercom (broadcast) group; a non-zero ssid joins an existing group, which must already exist, otherwise a "group does not exist" error is returned

Notes:

  • After creating an intercom group, if the caller does not invoke the send-voice or play-WAV interface within 15 seconds, LM automatically releases the group and closes the intercom session with the front-end devices.
  • If a device is already occupied by another intercom group, LM returns a User Already Opening error; the caller can read the occupying group's ssid from the detail array in the response and then issue a join request for that group to talk.
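The branching described in these notes can be sketched in plain JavaScript. This is an assumption-laden sketch: the field names (result, ssid, detail[].result, detail[].ssid) follow the samples in this document, and the exact busy-device error string ("User Already Opening") is taken from the note above and should be verified against the API reference.

```javascript
// Decide how to proceed from a /api/v1/talk/start response (sketch).
function resolveTalkSession(res) {
  if (res.result === 200 && res.ssid !== 0) {
    return { ssid: res.ssid, joined: false };  // a new group was created
  }
  const d = res.detail && res.detail[0];
  if (d && d.result === 'User Already Opening' && d.ssid) {
    return { ssid: d.ssid, joined: true };     // device busy: join its existing group
  }
  return null;                                 // start failed
}
```

A caller would open the WebSocket audio channel with the returned ssid in either successful case, and surface the detail[].result string as an error otherwise.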

Request

    POST /api/v1/talk/start?token=f384932b-92b9-4947-a567-10c33a82b44f HTTP/1.1
    Content-Type: application/json
    Accept: */*
    Accept-Encoding: gzip, deflate, br
    Connection: keep-alive
    Content-Length: 63

    {
        "ssid": 0,
        "deviceId": [
            "A20221214135512",
            "A123445"
        ]
    }

Response

The per-device start result must be read from the detail array in the response:

    HTTP/1.1 200 OK
    Content-Type: application/json
    Content-Length: 177
    Connection: keep-alive
    Access-Control-Allow-Origin: *

    {
        "result": 200,
        "message": "OK",
        "ssid": 1,
        "detail": [
            {
                "deviceId": "A20221214135512",
                "result": "OK"
            },
            {
                "deviceId": "A123445",
                "result": "Invalid Device ID"
            }
        ]
    }
Stop intercom (broadcast)

This interface closes an intercom group. Calling it is optional: LM also detects on its own whether any users remain in a group and releases it automatically. Request parameters:

  • deviceId empty closes the entire intercom group identified by ssid
  • ssid must be non-zero, and the group must exist in LM

请求

    POST /api/v1/talk/stop?token=8305918c-69e8-4280-ae22-85aa5d696e0e HTTP/1.1
    Content-Type: application/json
    Accept: */*
    Host: 192.168.3.23:9030
    Accept-Encoding: gzip, deflate, br
    Connection: keep-alive
    Content-Length: 20
    
    {
        "ssid":636
    }

Response

    HTTP/1.1 200 OK
    Content-Type: application/json
    Content-Length: 42
    Connection: keep-alive
    Access-Control-Allow-Origin: *
    
    {
        "result": 200,
        "message": "OK"
    }
Send voice

After the start-intercom call returns successfully, the client sends locally captured PCM audio data over a WebSocket. Audio format: 44100 Hz sample rate, 16-bit samples, mono.
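The wire format is raw little-endian 16-bit PCM, so the browser's Float32 samples (range [-1, 1]) have to be converted before sending. A minimal sketch of that conversion, mirroring the clamping and scaling used by the encodePCM helper later in this article:

```javascript
// Convert an array of Float32 samples in [-1, 1] to raw 16-bit
// little-endian PCM, returned as an ArrayBuffer.
function floatToPcm16(samples) {
  const buf = new ArrayBuffer(samples.length * 2);
  const view = new DataView(buf);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));               // clamp to [-1, 1]
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7FFF, true);   // scale, little-endian
  }
  return buf;
}
```

The resulting buffer can be handed directly to WebSocket.send(), since the socket's binaryType is set to 'arraybuffer' in the sending code below.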

Play a WAV file

This interface only supports WAV files containing PCM audio.
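Since only PCM-encoded WAV is accepted, a client can cheaply reject unsuitable files before uploading by inspecting the RIFF header. This sketch assumes the canonical WAV layout (the "fmt " chunk starting at byte offset 12), which most encoders produce; strictly speaking, chunks may appear in other orders.

```javascript
// Return true if the buffer looks like a canonical PCM WAV file:
// "RIFF" at offset 0, "WAVE" at 8, "fmt " at 12, audioFormat == 1 (PCM).
function isPcmWav(buf) {
  if (buf.byteLength < 24) return false;
  const v = new DataView(buf);
  const tagAt = (off, s) =>
    [...s].every((c, i) => v.getUint8(off + i) === c.charCodeAt(0));
  if (!tagAt(0, 'RIFF') || !tagAt(8, 'WAVE') || !tagAt(12, 'fmt ')) return false;
  return v.getUint16(20, true) === 1; // audioFormat 1 = uncompressed PCM
}
```

Any file that fails this check (for example, WAV containing A-law, mu-law, or IEEE-float audio) would need to be transcoded to PCM first.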

H5 audio capture in the browser

The page captures audio through the HTML5 getUserMedia API, which gives scripts access to hardware media devices (camera, microphone, and so on), so developers can reach these devices without any browser plug-in. getUserMedia is supported by all mainstream modern browsers.

Note

For an H5 page to be allowed to capture audio, one of the following two setups is required (using Chrome as the example):

  • Enable HTTPS on the server, or
  • Add a local browser exception:
  1. Enter chrome://flags/ in the address bar
  2. On the settings page, find the option "Insecure origins treated as secure", add the server address as an exception, and restart the browser; audio capture then works in the development environment
Capturing audio

 initTalk(){ // initialize the media device
      if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
        alert('This browser does not support audio input');
      } else if(this.record == null){
        navigator.mediaDevices.getUserMedia({audio: true})
        .then((mediaStream)=> {
          this.record = new this.getRecorder(mediaStream)
        })
      }
    }
getRecorder(stream){
      let sampleBits = 16;    // output bit depth: 8 or 16
      let sampleRate = 44100; // output sample rate
      let bufSize = 8192;
      let context = new AudioContext();
      let audioInput = context.createMediaStreamSource(stream);
      let recorder = context.createScriptProcessor(0, 1, 1);
      let audio_resample = new Resampler(context.sampleRate, sampleRate, 1, bufSize);
      let audioData = {
        size: 0,                        // total recorded length
        buffer: [],                     // recording cache
        inputSampleRate: sampleRate,    // input sample rate
        inputSampleBits: 16,            // input bit depth: 8 or 16
        outputSampleRate: sampleRate,
        outputSampleBits: sampleBits,
        clear: function () {
          audioData.buffer = [];
          audioData.size = 0;
        },
        input: function (data) {
          audioData.buffer.push(new Float32Array(data));
          audioData.size += data.length;
        },
        compress: function () { // merge and downsample
          // merge the cached chunks into one buffer
          let data = new Float32Array(this.size);
          let offset = 0;
          for (let i = 0; i < this.buffer.length; i++) {
              data.set(this.buffer[i], offset);
              offset += this.buffer[i].length;
          }
          // downsample by keeping every n-th sample
          let compression = parseInt(this.inputSampleRate / this.outputSampleRate);
          let length = data.length / compression;
          let result = new Float32Array(length);
          let index = 0, j = 0;
          while (index < length) {
              result[index] = data[j];
              j += compression;
              index++;
          }
          return result;
        },
        encodePCM: function () { // emit raw PCM; any further format handling is left to the server
          let sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
          let sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
          let bytes = this.compress();
          let dataLength = bytes.length * (sampleBits / 8);
          let buffer = new ArrayBuffer(dataLength);
          let data = new DataView(buffer);
          let offset = 0;
          for (let i = 0; i < bytes.length; i++, offset += 2) {
              let s = Math.max(-1, Math.min(1, bytes[i]));
              data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
          }
          return new Blob([data]);
        }
      };
      this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
      }
      this.stop = function () {
        recorder.disconnect();
      }
      this.getBlob = function () {
        return audioData.encodePCM();
      }
      this.clear = function () {
        audioData.clear();
      }
      recorder.onaudioprocess = function (e) {
        audioData.input(audio_resample.resample(e.inputBuffer.getChannelData(0)));
      }
    }
 getTalkStart(){
      this.initTalk()
      if(this.talkOpen){ // this.talkOpen === true means intercom should be started
        let data = {
          deviceId: [this.orgId],
          ssid: 0
        };
        talkStartServe(data, this.getToken()).then(res => {
          if(res.result == 200 && res.ssid != 0) {
            this.ssid = res.ssid
            this.imgSrc = require('@/assets/img/yj-open.png')
            this.$message.success('Device intercom started')
            this.talkOpen = false
            this.sendTalk()
          } else {
            if(res.detail[0] != undefined){
                if(res.detail[0].result == "User Opened Talking"){
                    // device already in a group: join its existing session
                    this.ssid = res.detail[0].ssid
                    this.imgSrc = require('@/assets/img/yj-open.png')
                    this.$message.success('Device intercom started')
                    this.talkOpen = false
                    this.sendTalk()
                }
                else{
                   this.$message.error(res.detail[0].result)
                   this.imgSrc = require('@/assets/img/yunjing.png')
                   this.talkOpen = true
                }
            }
            else{
              this.$message.error('Failed to start intercom');
              this.imgSrc = require('@/assets/img/yunjing.png')
              this.talkOpen = true
            }
          }
        })
      } else { // stop intercom
        if(this.record != null){
            this.record.stop();
        }

        if(this.socket != null){
            this.socket.close();
        }

        if(this.pcmPlayer != null){
           this.pcmPlayer.destroy();
           this.pcmPlayer = null;
        }

        // No explicit stop call is needed: the backend closes the intercom
        // group automatically when the connection drops.
        this.$message.success('Device intercom stopped')
        this.imgSrc = require('@/assets/img/yunjing.png')
        this.talkOpen = true

        /* Explicit stop, if preferred over automatic cleanup:
        talkStopServe({ssid: this.ssid}, this.getToken()).then(res => {
           if(res.result == 200) {
            this.$message.success('Device intercom stopped')
            this.imgSrc = require('@/assets/img/yunjing.png')
            this.talkOpen = true
          } else {
            this.$message.error('Operation failed')
            this.imgSrc = require('@/assets/img/yj-open.png')
            this.talkOpen = false
          }
        })*/
      }
    }

 Sending audio

 sendTalk() { // send audio over WebSocket
    let that = this // capture `this` for use inside callbacks
    let locUrl = window.location.host
    let stemp = '';
    if(window.location.protocol === 'https:') {
      stemp = 's';
    }
      const socketUrl = 'ws' + stemp + '://' + locUrl + '/ws_talk/' + this.ssid + '?token=' + this.getToken(); // + '&sampleRate=' + this.context.sampleRate;
      that.socket = new WebSocket(socketUrl)
      that.socket.binaryType = 'arraybuffer'
      that.socket.onopen = function () {
        console.log('Browser WebSocket opened');
        if(that.record != null){
              that.record.start();
        }
        else
        {
          console.log('Failed to start audio capture');
        }
        window.timeInte = setInterval(() => {
          if(that.socket != null && that.socket.readyState == 1) {
            if (that.record != null && that.record.getBlob().size != 0) {
              that.socket.send(that.record.getBlob()); // send the audio data
              that.record.clear();
            }
          }
        },20)
      };
      if(that.pcmPlayer == null)
      {
          that.pcmPlayer = new PCMPlayer({
                                            inputCodec: 'Int16',
                                            channels: 1,
                                            sampleRate: 16000,
                                            flushTime: 200
                                            })
      }
      that.socket.onmessage = function(msg) { // play audio received over the WebSocket
         that.pcmPlayer.feed(msg.data)
      }
    }

Common error codes

Contact:

Hangzhou Houhang Technology Co., Ltd. http://houhangkeji.com/

QQ technical discussion group: 698793654

 
