An Introduction to the getUserMedia API

In the mid-90s, chat was one of the best products available on the web. Raise your hand if you were young then and thought how cool it would be to develop your own chat application. One of the best features of those applications was the ability to capture microphone audio and/or webcam video and send it over the Internet. To implement such features, developers relied for a long time on plugins like Flash and Silverlight. However, Flash and Silverlight can be a problem if you don't have the proper permissions or you're not tech-savvy. Today, such plugins aren't required anymore thanks to the WebRTC project and its related APIs. This article will introduce the getUserMedia API, one of the APIs derived from the WebRTC project.

What's the getUserMedia API

The getUserMedia API provides access to multimedia streams (video, audio, or both) from local devices. There are several use cases for this API. The first one is obviously real-time communication, but we can also employ it to record tutorials or lessons for online courses. Another interesting use case is the surveillance of your home or workplace. On its own, this API is only capable of acquiring audio and video, not sending the data or storing it in a file. To have a complete working chat, for example, we need to send data across the Internet. This can be done using the RTCPeerConnection API. To store the data we can use the MediaStreamRecorder API.

The getUserMedia API is great for both developers and users. Developers can now access audio and video sources with a single function call, while users don't need to install any additional software. From the user's perspective, this also means less time before they can start using the feature, and broader use of the software by non tech-savvy people.

Although the getUserMedia API has been around for a while now, as of December 30th, 2013 it's still a W3C Working Draft, so the specification may still undergo several changes. The API exposes only one method, getUserMedia(), which belongs to the window.navigator object. The method accepts as its parameters a constraints object, a success callback, and a failure callback. The constraints parameter is an object with one or both of the properties audio and video. The value of each property is a Boolean, where true means the stream (audio or video) is requested and false means it isn't. So, to request both audio and video, pass the following object.

{
  video: true,
  audio: true
}

Alternatively, the value can be a Constraints object. This type of object gives us more control over the requested stream. For instance, we can choose to retrieve a video source at a high resolution, such as 1280×720, or a low one, such as 320×180. Each Constraints object contains two properties, mandatory and optional. mandatory is an object specifying the set of constraints that the UA must satisfy or else call the errorCallback. optional is an array of objects specifying the set of constraints that the UA should try to satisfy, but may ignore if they cannot be satisfied.

Let’s say that we want audio and video of the user, where the video must be at least at a high resolution and have a framerate of 30. In addition, if available, we want the video at a framerate of 60. To perform this task, we have to pass the following object.

{
  video: {
    mandatory: {
      minWidth: 1280,
      minHeight: 720,
      minFrameRate: 30
    },
    optional: [
      { minFrameRate: 60 }
    ]
  },
  audio: true
}

You can find more information on the properties available in the specifications.

The other two arguments to getUserMedia() are the callbacks invoked on success and failure, respectively. On success, the retrieved stream(s) are passed to the callback. On failure, the error callback is passed a MediaError object containing information on the error that occurred.
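
As a sketch of what the failure callback might do with that object, the helper below maps the error's numeric code to a readable message. The helper name is hypothetical (not part of the API), and while early implementations exposed PERMISSION_DENIED as code 1, the exact shape of the error object varied between browsers, so treat this as an illustration only.

```javascript
// Hypothetical helper (not part of the getUserMedia API): turns the error
// object passed to the failure callback into a readable message. Early
// implementations exposed PERMISSION_DENIED as numeric code 1, but the
// shape of the object varied between browsers.
function describeMediaError(error) {
  var code = error && error.code;
  if (code === 1) {
    return 'Permission denied by the user';
  }
  return 'Media capture failed (code: ' + code + ')';
}
```

You could then call `console.log(describeMediaError(error))` inside the failure callback instead of logging the raw code.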

Browser Compatibility

Support for the getUserMedia API is decent on desktop but quite poor on mobile. Besides, most of the browsers that support it still implement the vendor-prefixed version. Currently, the desktop browsers that implement the API are Chrome 21+ (-webkit prefix), Firefox 17+ (-moz prefix), and Opera 12+ (unsupported from version 15 to 17), with some issues in older versions. On mobile, only Chrome 21+ (-webkit prefix) and Opera 12+ (-webkit prefix from version 16) support the API. Also note that in Chrome the API won't work if the page using it is opened through the file:// protocol.
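
The prefix situation above is usually handled with a small feature-detection helper. The sketch below (hypothetical name; the demo later in this article inlines the same logic) picks whichever implementation the browser exposes:

```javascript
// Resolve the getUserMedia implementation on a navigator-like object,
// checking the unprefixed name first and then the vendor-prefixed ones.
// Returns null when the API isn't available at all.
function resolveGetUserMedia(nav) {
  return nav.getUserMedia       ||
         nav.webkitGetUserMedia ||
         nav.mozGetUserMedia    ||
         null;
}
```

In a page you'd call it as `var getUserMedia = resolveGetUserMedia(window.navigator);` and bail out gracefully when it returns null.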

The case of Opera is really interesting and deserves a note. This browser implemented the API, but for a reason unknown to me, after the switch to the Blink rendering engine in version 15 it stopped supporting it. Support was finally restored in version 18. As if that were not enough, Opera 18 is also the first version to support the audio stream.

That said, we can work around the compatibility issues thanks to a shim called getUserMedia.js. It tests the browser and, if the API isn't implemented, falls back to Flash.

Demo

In this section I'll show you a basic demo so that you can see how the getUserMedia API works and see its parameters in action. The goal of this demo is to create a "mirror": everything captured from the webcam and the microphone will be played back through the screen and the speakers. We'll ask the user for permission to access both multimedia streams, and then output them using the HTML5 video element. The markup is pretty simple. In addition to the video element, we have two buttons: one to start the demo and one to stop it.

Regarding the scripting part, we first test for browser support. If the API isn't supported, we display the message "API not supported" and disable the two buttons. If the browser supports the getUserMedia API, we attach a listener to the click event of the buttons. When the "Play demo" button is clicked, we test whether we're dealing with an old version of Opera, because of the issues described in the previous section. Then we request the audio and video data from the user's device. If the request succeeds, we stream the data using the video element; otherwise, we log the error to the console. The "Stop demo" button pauses the video and stops the streams.

A live demo of the code below is available here.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
    <title>getUserMedia Demo</title>
    <style>
      body
      {
        max-width: 500px;
        margin: 2em auto;
        font-size: 20px;
      }

      h1
      {
        text-align: center;
      }
         
      .buttons-wrapper
      {
        text-align: center;
      }

      .hidden
      {
        display: none;
      }

      #video
      {
        display: block;
        width: 100%;
      }

      .button-demo
      {
        padding: 0.5em;
        display: inline-block;
        margin: 1em auto;
      }

      .author
      {
        display: block;
        margin-top: 1em;
      }
    </style>
  </head>
  <body>
    <h1>getUserMedia API</h1>
    <video id="video" autoplay controls></video>
    <div class="buttons-wrapper">
      <button id="button-play-gum" class="button-demo">Play demo</button>
      <button id="button-stop-gum" class="button-demo">Stop demo</button>
    </div>
    <span id="gum-unsupported" class="hidden">API not supported</span>
    <span id="gum-partially-supported" class="hidden">API partially supported (video only)</span>
    <script>
      var videoStream = null;
      var video = document.getElementById("video");

      // Test browser support
      window.navigator = window.navigator || {};
      navigator.getUserMedia = navigator.getUserMedia       ||
                               navigator.webkitGetUserMedia ||
                               navigator.mozGetUserMedia    ||
                               null;

      if (navigator.getUserMedia === null) {
        document.getElementById('gum-unsupported').classList.remove('hidden');
        document.getElementById('button-play-gum').setAttribute('disabled', 'disabled');
        document.getElementById('button-stop-gum').setAttribute('disabled', 'disabled');
      } else {
        // Opera <= 12.16 accepts the direct stream.
        // More on this here: http://dev.opera.com/articles/view/playing-with-html5-video-and-getusermedia-support/
        var createSrc = window.URL ? window.URL.createObjectURL : function(stream) {return stream;};

        // Opera <= 12.16 support video only.
        var audioContext = window.AudioContext       ||
                           window.webkitAudioContext ||
                           null;
        if (audioContext === null) {
          document.getElementById('gum-partially-supported').classList.remove('hidden');
        }

        document.getElementById('button-play-gum').addEventListener('click', function() {
          // Capture user's audio and video source
          navigator.getUserMedia({
            video: true,
            audio: true
          },
          function(stream) {
            videoStream = stream;
            // Stream the data
            video.src = createSrc(stream);
            video.play();
          },
          function(error) {
            console.log("Video capture error: ", error.code);
          });
        });
        document.getElementById('button-stop-gum').addEventListener('click', function() {
          // Pause the video
          video.pause();
          // Stop the stream
          videoStream.stop();
        });
      }
    </script>
  </body>
</html>

Conclusion

This article has introduced you to the WebRTC project, one of the most exciting web projects of recent years. In particular, it discussed the getUserMedia API. The ability to create a real-time communication system using only the browser and very few lines of code is terrific and opens up a lot of new opportunities.

As we've seen, the getUserMedia API is simple yet very flexible. It exposes just one method, but its first parameter, constraints, allows us to request the audio and video streams that best fit our application's needs. Browser compatibility isn't very broad yet, but it's increasing, and this is good news! To better understand the concepts in this article, don't forget to play with the provided demo. As a final note, I strongly encourage you to try changing the code to perform some task, for example applying a CSS filter to change how the video stream is shown.
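
As a starting point for that exercise, the sketch below cycles through a few standard CSS filter values each time it's called. The helper is pure and hypothetical (not part of the demo); note that in browsers of that era the CSS filter property was often still prefixed, so you may need to set style.webkitFilter rather than style.filter.

```javascript
// Cycle through a handful of CSS filter values. Given the current filter
// string, nextFilter() returns the next one in the list, wrapping back to
// '' (no filter) after the last entry. An unknown value restarts the cycle.
var FILTERS = ['', 'grayscale(1)', 'sepia(1)', 'invert(1)', 'blur(3px)'];

function nextFilter(current) {
  var i = FILTERS.indexOf(current);
  return FILTERS[(i + 1) % FILTERS.length];
}
```

Wired into the demo page, a click handler such as `video.addEventListener('click', function() { video.style.webkitFilter = nextFilter(video.style.webkitFilter); });` would toggle the effect on each click.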

Translated from: https://www.sitepoint.com/introduction-getusermedia-api/
