Streaming video with FFmpeg and DirectX 11

A few months ago at work, I was tasked with developing a custom, low-latency video player. Prior to this, I had worked only briefly with FFmpeg and not at all with DirectX 11. But I figured it shouldn’t be that hard. FFmpeg is pretty popular, DirectX 11 has been around for a while now, and it’s not like I needed to create intense 3D graphics or anything (yet).

Surely there would be tons of examples on how to do something basic like decode and render video, right?

Nope. Hence this article.

So that the next poor soul who needs to do this without experience in FFmpeg or DirectX 11 won’t have to bash their head into a wall just to spit out some video onto a screen.

Okay. Just a few more housekeeping things before we get to the juicy stuff.

  • The code samples provided are very simplified. I’ve left out return code checking, error handling, and, well, a bunch of stuff. My point is that the code samples are just that: samples. (I would have provided more fleshed-out examples, but you know. Intellectual property and all that.)

  • I won’t cover the principles of hardware-accelerated video decoding/rendering because it’s a little outside of the scope of this article. Besides, there are plenty of other resources that explain it far better than I could.

  • FFmpeg supports pretty much all protocols and encoding formats. Both RTSP and UDP worked with these samples, as well as video encoded in H264 and H265. I’m sure tons of others will work, too.

  • The project I created was CMake-based and doesn’t rely on Visual Studio’s build system (since we need to support non-DX renderers as well). It made things a tad more difficult, which is why I thought I’d mention it.

Without further ado, let’s get started!

Step #1: Set up the stream source and video decoder.

This is pretty much exclusively FFmpeg stuff. Just a matter of setting up the format context, codec context, and all the other structs that FFmpeg needs you to. For the setup, I relied pretty heavily on this example and the source code from another project called Moonlight.

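As a rough sketch of what that setup looks like (the URL and the helper's name here are my own placeholders, and, as with all the samples in this article, error checking is omitted):

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Hedged sketch: open a network stream and prepare a matching decoder.
// "url" is a placeholder; every call here should really be error-checked.
AVCodecContext* open_stream(const char* url, AVFormatContext** fmt_ctx, int* stream_index) {
    avformat_open_input(fmt_ctx, url, nullptr, nullptr);
    avformat_find_stream_info(*fmt_ctx, nullptr);

    // Pick the best video stream and the decoder that goes with it.
    const AVCodec* codec = nullptr;
    *stream_index = av_find_best_stream(*fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &codec, 0);

    // Copy the stream's parameters (resolution, extradata, etc.) into
    // the codec context before opening it.
    AVCodecContext* codec_ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(codec_ctx, (*fmt_ctx)->streams[*stream_index]->codecpar);
    avcodec_open2(codec_ctx, codec, nullptr);
    return codec_ctx;
}
```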
Note that you have to provide the hardware device type in some way to the AVCodecContext. I opted to do this the same way the FFmpeg example does: a basic string.

// initialize stream
const std::string hw_device_name = "d3d11va";
AVHWDeviceType device_type = av_hwdevice_find_type_by_name(hw_device_name.c_str());

// set up codec context
AVBufferRef* hw_device_ctx;
av_hwdevice_ctx_create(&hw_device_ctx, device_type, nullptr, nullptr, 0);
codec_ctx->hw_device_ctx = av_buffer_ref(hw_device_ctx);

// open stream

Once the setup is done, the actual decoding is pretty straightforward; it's just a matter of retrieving AVPackets from the stream source and decoding them into AVFrames with the codec context.
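That loop might be sketched like so (render() is a hypothetical hand-off to the renderer; with d3d11va active, the received frames carry GPU textures rather than raw pixel data):

```cpp
// Hedged sketch of the decode loop: pull packets from the stream,
// feed the decoder, and drain any frames it produces.
AVPacket* packet = av_packet_alloc();
AVFrame* frame = av_frame_alloc();

while (av_read_frame(fmt_ctx, packet) >= 0) {
    if (packet->stream_index == stream_index) {
        avcodec_send_packet(codec_ctx, packet);
        // One packet can yield zero or more frames, so drain in a loop.
        while (avcodec_receive_frame(codec_ctx, frame) >= 0) {
            render(frame);  // hypothetical hand-off to the DX11 renderer
        }
    }
    av_packet_unref(packet);
}
```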
