Front and Rear Camera Access with JavaScript's getUserMedia
It seems like not so long ago, every browser had the Flash plugin to get access to a device's media hardware to capture audio and video. With the help of these plugins, developers were able to access audio and video devices to stream and display live video feeds in the browser.
It all got easier, for developers and users alike, when HTML5 was introduced. HTML5 brought APIs with access to device hardware; one of them is MediaDevices. This API provides access to media input devices like microphones and cameras, and it exposes the getUserMedia method we'll be working with.
What's the getUserMedia API
The getUserMedia API uses the media input devices to produce a MediaStream; this MediaStream contains the requested media types, whether audio or video. Using the stream returned from the API, video feeds can be displayed in the browser, which is useful for realtime communication. When used alongside the MediaRecorder API (part of the MediaStream Recording API), we can record and store media data captured in the browser. Like the rest of the newly introduced APIs, this API only works on secure origins, although it also works on localhost and on file URLs.
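As a sketch of that recording workflow (assuming the browser supports the `MediaRecorder` interface; the helper name is ours), a stream can be captured into a playable Blob like this:

```javascript
// Record a MediaStream for a fixed duration and resolve with a Blob.
// Assumes browser support for the MediaStream Recording API.
function recordStream(stream, durationMs) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const recorder = new MediaRecorder(stream);
    // Each dataavailable event carries a chunk of encoded media.
    recorder.ondataavailable = event => chunks.push(event.data);
    // On stop, stitch the chunks into a single Blob for playback or upload.
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
    recorder.onerror = event => reject(event.error);
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```

The resulting Blob could then be set as the `src` of an `<audio>`/`<video>` element or uploaded to a server.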
Getting started
Let's walk through the steps from requesting permission to capture video data to displaying a live feed from the input device in the browser. First, we have to check whether the user's browser supports the `mediaDevices` API. This API exists on the navigator interface, which contains the current state and identity of the user agent. This is how the check is performed:
if('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices){
console.log("Let's get this party started")
}
First we check if the `mediaDevices` API exists within `navigator`, and then check if the `getUserMedia` API is available within `mediaDevices`. If this returns `true`, we can get started.
Requesting user permission
After confirming browser support for `getUserMedia`, the next step is to request permission to use the media input devices on the user agent. The method returns a `Promise`: if the user grants permission, it resolves to a media stream; if the user denies permission, blocking access to these devices, the promise is rejected instead.
if('mediaDevices' in navigator && 'getUserMedia' in navigator.mediaDevices){
    // note: `await` must run inside an async function (or a module with top-level await)
    const stream = await navigator.mediaDevices.getUserMedia({video: true})
}
The object provided as an argument to the `getUserMedia` method is called `constraints`; it determines which media input devices we are requesting permission for. For example, if the object contained `audio: true`, the user would be asked to grant access to the audio input device.
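As a minimal sketch of the full request (the helper name is ours, not part of the API), the permission prompt and its common failure modes can be handled like this:

```javascript
// Hypothetical helper: request devices and map the common rejection reasons.
// NotAllowedError → the user (or a permissions policy) denied access;
// NotFoundError   → no input device of the requested kind exists.
async function requestMedia(constraints) {
  try {
    return await navigator.mediaDevices.getUserMedia(constraints);
  } catch (err) {
    if (err.name === 'NotAllowedError') {
      console.warn('Permission to use the device was denied.');
    } else if (err.name === 'NotFoundError') {
      console.warn('No matching input device was found.');
    } else {
      console.warn(`getUserMedia failed: ${err.name}`);
    }
    return null;
  }
}

// Usage: request both camera and microphone in one prompt.
// requestMedia({ audio: true, video: true }).then(stream => { /* ... */ });
```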
Configuring media constraints
The constraints object is a MediaStreamConstraints object that specifies the types of media to request and the requirements for each media type. Using the `constraints` object, we can specify requirements for the requested stream, like the resolution of the stream and which camera to use (`front` or `back`).
A media type must be provided when making the request, either `video` or `audio`; a `NotFoundError` will be returned if the requested media type can't be found on the user's browser. If we intend to request a video stream of `1280 x 720` resolution, we can update the constraints object to look like this:
{
video: {
width: 1280,
height: 720,
}
}
With this update, the browser will try to match these quality settings for the stream, but if the video device can't deliver this resolution, the browser will return other available resolutions. To ensure that the browser returns a resolution not lower than the one provided, we have to use the `min` property. Update the constraints object to include the `min` property:
{
video: {
width: {
min: 1280,
},
height: {
min: 720,
}
}
}
This will ensure that the resolution of the returned stream will be at least `1280 x 720`. If this minimum requirement can't be met, the promise will be rejected with an `OverconstrainedError`.
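A sketch of handling that rejection (the function name is ours): `err.constraint` names the property that could not be satisfied, which lets us log it and retry with a relaxed request.

```javascript
// If the hardware can't meet a `min` constraint, getUserMedia rejects with
// OverconstrainedError; retry once without the minimums as a fallback.
async function startWithMinimums() {
  const strict = {
    video: {
      width: { min: 1280 },
      height: { min: 720 }
    }
  };
  try {
    return await navigator.mediaDevices.getUserMedia(strict);
  } catch (err) {
    if (err.name === 'OverconstrainedError') {
      // err.constraint tells us which property was unsatisfiable.
      console.warn(`Unsatisfiable constraint: ${err.constraint}`);
      return navigator.mediaDevices.getUserMedia({ video: true });
    }
    throw err;
  }
}
```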
Sometimes you're concerned about data usage and need the stream not to exceed a set resolution. This can come in handy when the user is on a limited plan. To enable this functionality, update the constraints object to contain a `max` field:
{
video: {
width: {
min: 1280,
max: 1920,
},
height: {
min: 720,
max: 1080
}
}
}
With these settings, the browser will ensure that the returned stream doesn't go below `1280 x 720` and doesn't exceed `1920 x 1080`. Other terms that can be used include `exact` and `ideal`. The `ideal` setting is typically used alongside the `min` and `max` properties to find the best possible setting that is closest to the ideal values provided.
You can update the constraints to use the `ideal` keyword:
{
video: {
width: {
min: 1280,
ideal: 1920,
max: 2560,
},
height: {
min: 720,
ideal: 1080,
max: 1440
}
}
}
To tell the browser to make use of the front or back (on mobile) camera on devices, you can specify a `facingMode` property in the `video` object:
{
video: {
width: {
min: 1280,
ideal: 1920,
max: 2560,
},
height: {
min: 720,
ideal: 1080,
max: 1440
},
facingMode: 'user'
}
}
This setting will use the front-facing camera at all times on all devices. To use the back camera on mobile devices, we can alter the `facingMode` property to `environment`.
{
video: {
...
facingMode: {
exact: 'environment'
}
}
}
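Because `exact: 'environment'` rejects outright on devices that have no back camera (most laptops, for instance), a fallback sketch like the following can be useful; the function name is ours:

```javascript
// Prefer the back camera, but fall back to the front-facing one when the
// exact facingMode request can't be satisfied.
async function openPreferredCamera() {
  try {
    return await navigator.mediaDevices.getUserMedia({
      video: { facingMode: { exact: 'environment' } }
    });
  } catch (err) {
    if (err.name === 'OverconstrainedError') {
      // No back camera available: settle for the user-facing one.
      return navigator.mediaDevices.getUserMedia({
        video: { facingMode: 'user' }
      });
    }
    throw err;
  }
}
```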
Using the enumerateDevices method
When called, this method returns all the available input media devices on the user's PC.
With it, you can offer the user a choice of which input media device to use for streaming audio or video content. It returns a Promise that resolves to a MediaDeviceInfo array containing information about each device.
An example of how to use this method is shown in the snippet below:
async function getDevices(){
    const devices = await navigator.mediaDevices.enumerateDevices();
    return devices;
}
A sample response for each of the devices would look like:
{
deviceId: "23e77f76e308d9b56cad920fe36883f30239491b8952ae36603c650fd5d8fbgj",
groupId: "e0be8445bd846722962662d91c9eb04ia624aa42c2ca7c8e876187d1db3a3875",
kind: "audiooutput",
label: "",
}
**Note:** A label won't be returned unless a stream is active or the user has granted permission to access the device.
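One common workaround, sketched below under that assumption (the helper name is ours): request a throwaway stream first so permission is granted, enumerate the devices, then release the tracks.

```javascript
// Labels are only populated once the page has device permission, so grab a
// temporary stream, enumerate, then stop the tracks to release the camera.
async function listLabeledCameras() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const devices = await navigator.mediaDevices.enumerateDevices();
  stream.getTracks().forEach(track => track.stop()); // turn the camera back off
  return devices
    .filter(device => device.kind === 'videoinput')
    .map(device => ({ id: device.deviceId, label: device.label }));
}
```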
Displaying the video stream in the browser
We've gone through the process of requesting and getting access to the media devices, configured constraints to include the required resolutions, and selected the camera we need to record video. After going through all these steps, we'll at least want to see whether the stream is delivering based on the configured settings. To ensure this, we'll use the `video` element to display the video stream in the browser.
Like we said earlier in the article, the `getUserMedia` method returns a Promise that resolves to a stream. In modern browsers the stream is assigned directly to the video element's `srcObject`; older code converted the stream to an object URL with the createObjectURL method and set that URL as the video source, but that approach is now deprecated for media streams.
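A small attachment helper, as a sketch, keeping the deprecated object-URL path only for very old browsers:

```javascript
// Attach a MediaStream to a <video> element. `srcObject` is the modern
// route; createObjectURL(stream) is deprecated and kept only as a fallback.
function attachStream(videoElement, stream) {
  if ('srcObject' in videoElement) {
    videoElement.srcObject = stream;
  } else {
    // Legacy browsers only accepted an object URL as the video source.
    videoElement.src = window.URL.createObjectURL(stream);
  }
  videoElement.onloadedmetadata = () => videoElement.play();
}
```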
We'll create a short demo where we let the user choose from the available list of video devices, using the enumerateDevices method. This is a `navigator.mediaDevices` method that lists the available media devices, such as microphones and cameras. It returns a Promise that resolves to an array of objects detailing the available media devices.
Create an `index.html` file and update its contents with the code below:
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
<link rel="stylesheet" href="style.css">
<title>Document</title>
</head>
<body>
<div class="display-cover">
    <video autoplay></video>
    <canvas class="d-none"></canvas>
    <div class="video-options">
        <select name="" id="" class="custom-select">
            <option value="">Select camera</option>
        </select>
    </div>
    <img class="screenshot-image d-none" alt="">
    <div class="controls">
        <button class="btn btn-danger play" title="Play"><i data-feather="play-circle"></i></button>
        <button class="btn btn-info pause d-none" title="Pause"><i data-feather="pause"></i></button>
        <button class="btn btn-outline-success screenshot d-none" title="ScreenShot"><i data-feather="image"></i></button>
    </div>
</div>
<script src="https://unpkg.com/feather-icons"></script>
<script src="script.js"></script>
</body>
</html>
In the snippet above, we've set up the elements we need and a couple of controls for the video. Also included is a button for taking screenshots of the current video feed. Note that the wrapping `display-cover` element is positioned relatively so the overlaid controls can be positioned against it. Now let's style these components a bit.
Create a `style.css` file and add the following styles to it. If you noticed, Bootstrap was included to reduce the amount of CSS we need to write to get the components going.
/* style.css */
.screenshot-image {
width: 150px;
height: 90px;
border-radius: 4px;
border: 2px solid whitesmoke;
box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.1);
position: absolute;
bottom: 5px;
left: 10px;
background: white;
}
.display-cover {
display: flex;
justify-content: center;
align-items: center;
width: 70%;
margin: 5% auto;
position: relative;
}
video {
width: 100%;
background: rgba(0, 0, 0, 0.2);
}
.video-options {
position: absolute;
left: 20px;
top: 30px;
}
.controls {
position: absolute;
right: 20px;
top: 20px;
display: flex;
}
.controls > button {
width: 45px;
height: 45px;
text-align: center;
border-radius: 100%;
margin: 0 6px;
background: transparent;
}
.controls > button:hover svg {
color: white !important;
}
@media (min-width: 300px) and (max-width: 400px) {
.controls {
flex-direction: column;
}
.controls button {
margin: 5px 0 !important;
}
}
.controls > button > svg {
height: 20px;
width: 18px;
text-align: center;
margin: 0 auto;
padding: 0;
}
.controls button:nth-child(1) {
border: 2px solid #D2002E;
}
.controls button:nth-child(1) svg {
color: #D2002E;
}
.controls button:nth-child(2) {
border: 2px solid #008496;
}
.controls button:nth-child(2) svg {
color: #008496;
}
.controls button:nth-child(3) {
border: 2px solid #00B541;
}
.controls button:nth-child(3) svg {
color: #00B541;
}
After styling, if you open the HTML file in your browser, you should see an empty video area with the camera selector and the styled control buttons laid over it.
The next step is to add functionality to the demo. Using the `enumerateDevices` method, we'll get the available video devices and set them as the options within the select element. Create a file `script.js` and update it with the following snippet:
feather.replace();
const controls = document.querySelector('.controls');
const cameraOptions = document.querySelector('.video-options>select');
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const screenshotImage = document.querySelector('img');
const buttons = [...controls.querySelectorAll('button')];
let streamStarted = false;
const [play, pause, screenshot] = buttons;
const constraints = {
video: {
width: {
min: 1280,
ideal: 1920,
max: 2560,
},
height: {
min: 720,
ideal: 1080,
max: 1440
},
}
};
const getCameraSelection = async () => {
const devices = await navigator.mediaDevices.enumerateDevices();
const videoDevices = devices.filter(device => device.kind === 'videoinput');
const options = videoDevices.map(videoDevice => {
return `<option value="${videoDevice.deviceId}">${videoDevice.label}</option>`;
});
cameraOptions.innerHTML = options.join('');
};
play.onclick = () => {
if (streamStarted) {
video.play();
play.classList.add('d-none');
pause.classList.remove('d-none');
return;
}
if ('mediaDevices' in navigator && navigator.mediaDevices.getUserMedia) {
const updatedConstraints = {
    ...constraints,
    video: {
        ...constraints.video,
        deviceId: {
            exact: cameraOptions.value
        }
    }
};
startStream(updatedConstraints);
}
};
const startStream = async (constraints) => {
const stream = await navigator.mediaDevices.getUserMedia(constraints);
handleStream(stream);
};
const handleStream = (stream) => {
video.srcObject = stream;
play.classList.add('d-none');
pause.classList.remove('d-none');
screenshot.classList.remove('d-none');
streamStarted = true;
};
getCameraSelection();
There are a couple of things going on in the snippet above; let's break them down:
- `feather.replace()`: this method call instantiates [feather](https://feathericons.com/), a great icon set for web development.
- The `constraints` variable holds the initial configuration for the stream. This will be extended to include the media device the user chooses.
- `getCameraSelection`: this function calls the `enumerateDevices` method; then we filter through the array from the resolved Promise and select the video input devices. From the filtered results, we create options for the `select` element.
- Calling the `getUserMedia` method happens within the `onclick` listener of the `play` button. Here, we check if this method is supported by the user's browser before starting the stream. Note that the chosen `deviceId` has to be nested inside the `video` constraint, since `deviceId` is a per-track property, not a top-level one.
- Next, we call the `startStream` function that takes a `constraints` argument. It calls the `getUserMedia` method with the provided `constraints`. `handleStream` is called with the stream from the resolved promise; this function sets the returned stream on the video element's `srcObject`.
Next, we'll add click listeners to the button controls on the page to pause the stream and take screenshots. Also, we'll add a listener to the `select` element to update the stream constraints with the selected video device.
Update the `script.js` file with the code below:
...
const startStream = async (constraints) => {
...
};
const handleStream = (stream) => {
...
};
cameraOptions.onchange = () => {
const updatedConstraints = {
    ...constraints,
    video: {
        ...constraints.video,
        deviceId: {
            exact: cameraOptions.value
        }
    }
};
startStream(updatedConstraints);
};
const pauseStream = () => {
video.pause();
play.classList.remove('d-none');
pause.classList.add('d-none');
};
const doScreenshot = () => {
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
screenshotImage.src = canvas.toDataURL('image/webp');
screenshotImage.classList.remove('d-none');
};
pause.onclick = pauseStream;
screenshot.onclick = doScreenshot;
Now, when you open the `index.html` file in the browser, clicking the `play` button should start the stream.
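Note that pausing the `video` element does not switch the camera off; to release the hardware you have to stop every track on the stream. A sketch that builds on the demo's `video` and `streamStarted` variables (the `stopStream` name is ours):

```javascript
// Stop the capture entirely: pausing only freezes playback, while stopping
// each track releases the camera (and turns off the recording indicator).
const stopStream = () => {
  const stream = video.srcObject;
  if (stream) {
    stream.getTracks().forEach(track => track.stop());
    video.srcObject = null;
  }
  streamStarted = false;
};
```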
Conclusion
This article has introduced the `getUserMedia` API, an interesting addition to the web platform that eases the process of capturing media in the browser. The API takes a parameter (`constraints`) that can be used to configure access to the audio and video input devices; it can also be used to specify the video resolution required for your application. You can extend the demo further to give the user an option to save the screenshots taken, as well as to record and store video and audio data with the help of the MediaRecorder API. Happy hacking.
Translated from: https://scotch.io/tutorials/front-and-rear-camera-access-with-javascripts-getusermedia