DirectShow Developer Q&A

Original post, July 4, 2001, 20:40


  • What is IAsyncReader for and how can I use it? What is CPullPin?
  • What's the difference between stopped, paused and running states in DirectShow?
  • How many threads are there in a typical filter graph?
  • What is g_Templates and why won't my application link without it?
  • Why is the wrong module registered for my filters?
  • How can I play back MPEG streams from live sources or the network?
  • How can I host DirectX Audio Plug-ins in my application?
What is IAsyncReader for and how can I use it? What is CPullPin?

Most streams in DirectShow use a requested push model. That is, you tell the upstream pin where it should start and stop, and it then delivers that section of the stream to your Receive method, on its thread, as a sequence of samples. This works fine for most playback scenarios, but there are some cases where it is not very efficient. One such case is the AVI file format: it is not really designed for sequential playback, since the chunks you need are often in the wrong place. To play it back efficiently, you need to be able to read at random within the file, rather than just parsing chunks as they arrive, as you would with MPEG for example.

In order to extract the file-reading parts of both the MPEG parser and the AVI parser into a common filter (so that, for example, it could be replaced by a URL filter), it was necessary to have an interface at this point that was efficient in both cases. This is the IAsyncReader interface. Using this interface, each sample is requested separately in a random-access style. You can either have the requests fulfilled synchronously on your own thread (SyncRead) or queue them and collect the completed requests later (Request and WaitForNext).

CPullPin provides the client code for this interface for a downstream pin, and is really designed to let the input pin look the same whether it uses IAsyncReader or the normal push-model IMemInputPin::Receive. If you connect to an output pin that uses IMemInputPin::Receive, the data will be sent to your Receive method. If you connect to an output pin that only supports IAsyncReader, then CPullPin provides the 'glue' that pulls data from the output pin and delivers it to your Receive method. It doesn't really get any of the benefit of being able to read data from different positions in the stream. Accordingly, you will not be surprised to learn that the MPEG parser uses CPullPin (in sync mode), whereas the AVI file parser uses the IAsyncReader interface directly in async mode, with its own code to access discontiguous chunks independently.

In sync mode, the async reader just reads the file. When you call IAsyncReader::SyncReadAligned, it does an unbuffered ReadFile call and then returns. CPullPin in this mode just has a loop that gets a buffer, calls the sync-read method and then calls the (overridden) Receive method to get it to your pin.
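In code, the synchronous case looks something like this (a minimal sketch, assuming pReader is the IAsyncReader pointer obtained from the connected output pin):

```cpp
// Read 4 KB from the start of the stream, blocking the caller's thread.
BYTE buffer[4096];
HRESULT hr = pReader->SyncRead(0,              // byte offset in the stream
                               sizeof(buffer), // number of bytes wanted
                               buffer);        // caller-supplied buffer
// SyncReadAligned is the variant CPullPin uses: it takes an IMediaSample
// whose buffer meets the reader's alignment requirements, which is what
// allows the unbuffered ReadFile underneath.
```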

In async mode you call Request to queue a request, which is passed to a worker thread within the async reader. At some point you can then call WaitForNext to get a completed buffer back from the async reader. Windows 95 does not support overlapped I/O, and hence I don't believe the file reader uses overlapped I/O even on Windows NT, where it is available.
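The async pattern looks roughly like this (a sketch, assuming pSample is a media sample whose start/stop times have been set to encode the requested byte range, as IAsyncReader expects):

```cpp
// Queue the read; Request returns immediately.
HRESULT hr = pReader->Request(pSample, 0);  // second argument is a user cookie
// ... queue further requests or do other work ...

// Later, collect a completed request (blocks until one is ready).
IMediaSample* pDone = NULL;
DWORD_PTR dwUser = 0;
hr = pReader->WaitForNext(INFINITE, &pDone, &dwUser);
```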

CPullPin in async mode has a loop that queues a request and waits for a completed request, always keeping one request queued ahead. So in this case, the only difference between the two modes is that in async mode the next request is sent to the disk almost immediately, whereas in sync mode the next request is not queued to the disk until your Receive method has completed (in most of the standard parsers, this is very quick).

So in a high bandwidth case where you need to queue the next request before completing the Receive processing, you may want to consider async mode, but in other cases, you are ok in sync mode. If you want to receive data from many discontiguous parts of the file, you probably want to discard CPullPin entirely and write your own access code, but for most streaming cases, sync mode with CPullPin gives you what you want.

It's also worth noting that the DirectShow MPEG splitters use IAsyncReader methods to access parts of the file at pin-connection time: IMemInputPin would not provide data until the graph was running (see the discussion below on live material).

What's the difference between stopped, paused and running states in DirectShow?

In a DirectShow graph, there are three possible states: stopped, paused and running. If a graph is stopped, filters hold a minimum set of resources and do not process data. If the graph is running, filters hold a maximal set of resources and data is processed and rendered.

If a graph is paused then data is processed but not rendered. Source filters will start (or continue) to push data into the graph and transform filters will process the data but the renderer will not draw or write the data. The transform filter will eventually block since its buffers are not being consumed by the renderer. Thus a graph can be cued for rapid starting simply by leaving it in paused mode: once some data arrives at the renderer, the rest of the graph will be blocked by the normal flow control mechanisms (GetBuffer or Receive will block) until the graph is run.
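For example, an application can cue and then start a graph along these lines (a sketch, assuming pGraph is the graph manager's IUnknown):

```cpp
// Pause() starts data flowing; GetState() blocks until data has reached
// the renderers; Run() then starts playback almost instantly.
IMediaControl* pControl = NULL;
pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
pControl->Pause();
OAFilterState fs;
pControl->GetState(INFINITE, &fs);  // wait for the pause to complete
// ... later, when playback should begin:
pControl->Run();
pControl->Release();
```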

Media types that can be rendered statically, such as video (where you can show a frozen frame), but not, for example, audio, are typically rendered in the paused state. Repainting the video window can then be achieved simply by transitioning the graph to paused mode and back.

How many threads are there in a typical filter graph?

DirectShow filter graphs typically have one thread per stream segment -- that is, one thread would be active on a source filter's output pin and would push data right through the transform filters and into the renderer. The transform function and delivery downstream would normally occur on this thread during the Receive call. A parser or other filter that has more than one output pin would typically have a new thread for each output pin (plus the source filter's thread on which it receives data). This can be done straightforwardly using a COutputQueue class in each output pin.
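A sketch of the COutputQueue approach (the parameter values here are illustrative; pConnected is the downstream input pin the output pin is connected to):

```cpp
// Create the queue when streaming starts; with bQueue == TRUE it spins up
// a worker thread, and Receive() hands samples to that thread for delivery.
HRESULT hr = S_OK;
COutputQueue queue(pConnected, // IPin* to deliver to
                   &hr,
                   FALSE,      // bAuto: don't pick the threading model automatically
                   TRUE,       // bQueue: use a worker thread
                   1,          // lBatchSize
                   FALSE,      // bBatchExact
                   10);        // lListSize: how many samples may queue up
queue.Receive(pSample);        // returns quickly; delivered on the worker thread
```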

In addition, there will typically be a filter graph manager thread and the application's own thread.

What is g_Templates and why won't my application link without it?

The DirectShow base class library strmbase.lib provides a COM class factory implementation to simplify the development of filters as COM objects. This class factory uses the template implementation you provide: your filter code will supply the templates for objects in your DLL in the g_Templates array, with a count of entries in g_cTemplates. If you are building a filter, you will need to supply this array and count.
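A typical definition looks like this (a sketch; CMyFilter, CLSID_MyFilter and sudMyFilter stand in for your own filter class, CLSID and AMOVIESETUP_FILTER description):

```cpp
CFactoryTemplate g_Templates[] = {
    { L"My Filter",               // object name
      &CLSID_MyFilter,            // object CLSID
      CMyFilter::CreateInstance,  // static creation function
      NULL,                       // optional init routine
      &sudMyFilter }              // setup data used for registration
};
int g_cTemplates = sizeof(g_Templates) / sizeof(g_Templates[0]);
```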

If, however, you are writing an application that uses the DirectShow base classes, you do not need to define this -- you can set g_cTemplates to 0 and then g_Templates will not be referenced.
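One common stub for the application case:

```cpp
// With g_cTemplates == 0 the array contents are never examined, so a
// dummy element is enough to satisfy the linker.
CFactoryTemplate g_Templates[1];
int g_cTemplates = 0;
```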

If your DLL is a COM DLL that uses another class factory implementation (perhaps your DLL is an ATL control), then you will be exporting DllGetClassObject from your DLL. This is implemented in a number of places, including the ATL and MFC libraries as well as DirectShow's library. Make sure that the correct library is found first; otherwise, even if you define g_cTemplates, you will not be able to create any of your ATL objects.

The DLL registers OK but the wrong module is registered for my filters.

You need to make sure that your DLL starts at DllEntryPoint, defined in dllentry.cpp in the class library. Otherwise the class library's global variable g_hInst is not set to the correct module handle, and registration will go wrong.
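The usual self-registration exports simply forward to the base classes, and they depend on g_hInst being set by that entry point (a sketch; the /ENTRY switch shown is for 32-bit MSVC builds):

```cpp
// dllentry.cpp's DllEntryPoint must be the linker entry point, e.g.
// /ENTRY:"DllEntryPoint@12", so that g_hInst is initialised before these run.
STDAPI DllRegisterServer()   { return AMovieDllRegisterServer2(TRUE);  }
STDAPI DllUnregisterServer() { return AMovieDllRegisterServer2(FALSE); }
```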

How can I play back MPEG streams from live sources or the network?

You will need a source filter that introduces the MPEG data into the graph. You will probably also need an MPEG splitter (also called a parser) that works with live streams, and you will also need a decompressor if you are using MPEG-2 (the MPEG filter supplied with DirectShow supports only MPEG-1).

The DirectShow-supplied splitters for MPEG-1 and MPEG-2 do two tasks in addition to separating out the individual elementary streams: they convert the embedded PTS times into DirectShow timestamps, and they create a media type including the sequence header. Both of these involve searching around in the file at pin-connection time, which is clearly not possible for live sources.

The best solution is to create a new splitter filter, based on the sample code in the DirectShow SDK (under mpegparse). Your source filter would supply data using the IMemInputPin transport rather than IAsyncReader (see the discussion of IAsyncReader above). The splitter filter would look for the first PTS and the first sequence header after any discontinuity, and would use these for the media type and as a base for timestamp conversion. A default media type would be needed at connection time, to be replaced by a dynamically detected media type when the first sequence header is detected.
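The timestamp conversion itself is simple (a sketch, assuming a 90 kHz MPEG PTS and basePts captured from the first PTS seen after a discontinuity):

```cpp
// REFERENCE_TIME is in 100 ns units, so scale 90 kHz ticks by 10,000,000 / 90,000.
REFERENCE_TIME PtsToRefTime(LONGLONG pts, LONGLONG basePts)
{
    return (pts - basePts) * 1000 / 9;
}
```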

How can I host DirectX Audio Plug-ins in my application?

The right way to do this, I believe, is to create a source and sink filter as part of your application and connect the plugin between them. When you call the source filter from your application, it delivers the data to the plugin (a DirectShow transform filter) and the processed data arrives at your sink filter when done, from where it goes back to your application.

Construct the two filters as C++ objects by calling new (rather than registering them with class IDs and calling CoCreateInstance). That way, you can call public methods on the objects without needing to define a custom COM interface to get access to them. They still need to be COM objects, of course (in particular, don't forget to initialise the refcount to 1 after creating them, and destroy them via Release rather than delete).
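For example (a sketch; CAppSourceFilter is a stand-in for your own CBaseFilter-derived class, and its constructor signature is your choice):

```cpp
HRESULT hr = S_OK;
CAppSourceFilter* pSource = new CAppSourceFilter(&hr);
pSource->AddRef();   // COM objects start life with a refcount of 1
// ... call public C++ methods on pSource directly ...
pSource->Release();  // never call delete on a COM object
```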

The source filter is derived from CBaseFilter and has a single output pin derived from CBaseOutputPin. When the app wants to deliver data, you call GetDeliveryBuffer, fill the buffer with data, and then call Deliver (both of these methods are on the output pin).
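Pushing one buffer from the app then looks roughly like this (a sketch; m_pOut is the CBaseOutputPin-derived pin, and pAudioData/cbData are the app's data and its size):

```cpp
IMediaSample* pSample = NULL;
HRESULT hr = m_pOut->GetDeliveryBuffer(&pSample, NULL, NULL, 0);
if (SUCCEEDED(hr)) {
    BYTE* pDest = NULL;
    pSample->GetPointer(&pDest);
    memcpy(pDest, pAudioData, cbData);     // copy the app's data in
    pSample->SetActualDataLength(cbData);
    hr = m_pOut->Deliver(pSample);         // hand it to the plug-in
    pSample->Release();
}
```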

The sink filter is also derived from CBaseFilter and has a single input pin based on CBaseInputPin. The data arrives at the Receive method of the input pin, from where you can either call out to your app or wait for your app to collect it.
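A sketch of such a Receive override (CAppInputPin derives from CBaseInputPin, and m_pOwner->OnData is an app-supplied callback; both names are illustrative):

```cpp
STDMETHODIMP CAppInputPin::Receive(IMediaSample* pSample)
{
    HRESULT hr = CBaseInputPin::Receive(pSample);  // base-class checks
    if (hr != S_OK) {
        return hr;
    }
    BYTE* pData = NULL;
    pSample->GetPointer(&pData);
    m_pOwner->OnData(pData, pSample->GetActualDataLength());
    return S_OK;
}
```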

Note that the delivery of data from the plug-in to your sink filter is not necessarily tied to the delivery of data from your source into the plug-in. Most transforms (e.g., most of those written to the Sonic Foundry template) will do the transform synchronously: during your source filter's Deliver call, the data will be processed and delivered to the sink filter. But there is no guarantee that this is the case (or that, e.g., the same size and count of buffers are used). A simple approach used by some hosts is to support only 'synchronous' plug-ins: call Deliver with the data, then call the sink filter to pick up the processed data, and if it hasn't arrived, complain that the plug-in is not supported. I don't like this idea because it is unnecessarily restrictive on the design of transform filters (e.g., you couldn't do MPEG decode like that). A better idea is to have an async model where the sink filter calls back to your app when the data arrives.

Other things to worry about are:

  • Negotiation of the size and count of buffers used (separately on input to and output from the transform)
  • Instantiation of the filter graph manager and building of the three-filter graph
  • Finding the right transform by enumeration using IFilterMapper2 from CLSID_FilterMapper2 (see the sketch after this list)
  • Property pages: OleCreatePropertyFrame is modal, so you either mess about with separate threads for the property page or you supply your own property frame to host the property pages.
  • Storing the filter's property settings using IPersistStream and, e.g., CreateStreamOnHGLOBAL.
  • Timestamping the samples somehow in the source filter, in case the transform uses this.
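For the enumeration point above, something along these lines should work (a sketch; the audio-only criteria are placeholders for whatever your host really needs):

```cpp
IFilterMapper2* pMapper = NULL;
CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
                 IID_IFilterMapper2, (void**)&pMapper);
IEnumMoniker* pEnum = NULL;
GUID types[2] = { MEDIATYPE_Audio, GUID_NULL };  // one {major, sub} pair
pMapper->EnumMatchingFilters(
    &pEnum, 0, TRUE, MERIT_DO_NOT_USE + 1,
    TRUE, 1, types, NULL, NULL,   // must accept a matching input type
    FALSE,                        // bRender: input need not be rendered
    TRUE, 1, types, NULL, NULL);  // must offer a matching output type
// Walk pEnum with Next() and bind each moniker to get the filter.
```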

 
