The never-ending story: GStreamer and hardware integration

This is the companion article for my talk at the GStreamer Conference 2013: The never-ending story: GStreamer and hardware integration

The topic of hardware integration into GStreamer (think of GPUs, DSPs, hardware codecs, OpenMAX IL, OpenGL, VAAPI, video4linux, etc.) was always a tricky one. In the past, in the 0.10 release series, it was often impossible to make full and efficient use of special hardware features, and many ugly hacks were required. In 1.x we reconsidered many of these problems and adjusted existing APIs, or added new ones, to make it possible to solve them in a cleaner way. Although all these APIs are properly documented in detail, what's missing is some kind of high-level/overview documentation that explains how it all fits together and which API to use when and for what. That's what I'm trying to provide here.

What’s new in GStreamer 1.0 & 1.2

The relevant changes in GStreamer 1.x for hardware integration are the much better memory management and control via the GstMemory and GstAllocator abstractions and GstBufferPool, the support for arbitrary buffer metadata with GstMeta, the sharing of device contexts between elements with GstContext, and the reworked (re-)negotiation and (re-)configuration mechanisms, especially the new ALLOCATION query. In the following I will explain how to use these in the context of hardware integration and how they all work together, assuming that you already know and understand the basics of GStreamer.

Special memory

The most common problem when dealing with special hardware and the APIs to use it is that it requires the usage of special memory. Maybe there are some constraints on the memory (e.g. regarding alignment, or physically contiguous memory is required), or you need to use some special API to access the memory and it doesn't behave like normal system memory. Examples of the second case are VAAPI and OpenGL, where you have to use special APIs to do anything with the memory, or the new DMA-BUF Linux kernel API, where you just pass around a file descriptor that might be mappable into the process' virtual address space via mmap().

Custom memory types

In 0.10, special memory was usually handled by subclassing GstBuffer and somehow ensuring that you got a buffer of the correct type and that it was not copied to a normal GstBuffer in between. This was usually unreliable and rather hackish.

In 1.0 it's no longer possible to subclass GstBuffer; a GstBuffer is now only a collection of memory abstraction objects and metadata. Instead you implement a custom GstMemory and GstAllocator. GstMemory here just represents an instance of, or handle to, a specific piece of memory and has no logic implemented itself. For the logic it points to its corresponding GstAllocator. The reason for this split is mainly that a) there might be multiple allocators for a single memory type (think of the different ways of allocating an EGLImage), and b) you want to negotiate the allocator and memory type between elements before allocating any of these memories.

This new memory abstraction has explicit operations to request read, write or exclusive access to the memory, and using it like normal system memory requires an explicit call to gst_memory_map(). This mapping to system memory allows it to be used from C like memory allocated via malloc(), but might not be possible for every type of memory. Subclasses of GstMemory have to implement these operations and map them to whatever the specific type of memory provides, and can provide additional API on top of that. The memory can also be marked as read-only, or can even be inaccessible to the process and just passed around as an opaque handle between elements.
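
As a minimal sketch (assuming the GstMemory in question is CPU-accessible at all, which special memory types do not have to guarantee), mapping looks like this; gst_buffer_map() works the same way for all memories of a buffer at once:

```c
#include <gst/gst.h>

/* Sketch: access a GstMemory as plain system memory. Mapping can fail for
 * memory types that are not CPU-accessible. */
static void
dump_memory_size (GstMemory * mem)
{
  GstMapInfo info;

  if (gst_memory_map (mem, &info, GST_MAP_READ)) {
    /* info.data / info.size now behave like malloc'd memory */
    g_print ("mapped %" G_GSIZE_FORMAT " bytes at %p\n", info.size, info.data);
    gst_memory_unmap (mem, &info);
  } else {
    g_print ("this memory cannot be mapped to the CPU\n");
  }
}
```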

Let me give some examples. For DMA-BUF you would implement the mapping into the virtual address space of the process via mmap() (which is done by GstDmabufMemory). For VAAPI you could implement additional API for the memory that provides it as a GL texture id or converts it to a different color format (which is done by gst-vaapi). For memory that represents a GL texture id you could implement mapping into the process' virtual address space by dispatching an operation to the GL thread and using glTexImage2D() or glReadPixels(), and implement copying of memory in a similar way (which is done by gst-plugins-gl). You could also implement some kind of access control for the memory, which would allow easier implementation of DRM schemes (all engineers are probably dying a bit inside while reading this now…), or do crazy things like implementing some kind of file-backed memory.

With GstBuffer now just being a collection of memories, it provides convenience API to handle all the contained memories, e.g. to copy all of them or to map all of them into the process' virtual address space.

Memory constraints

If all you need is memory that behaves like normal malloc'd memory but fulfils some additional constraints like alignment or being physically contiguous, you don't need to implement your own GstMemory and GstAllocator subclass. The allocation function of GstAllocator, gst_allocator_alloc(), allows you to provide parameters to specify such things, and the default memory implementation handles this just fine.
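
A minimal sketch of that, using the default (malloc-based) allocator; the alignment value is a mask, so 15 requests 16-byte alignment (the exact values here are only an assumption for illustration):

```c
#include <gst/gst.h>

/* Sketch: allocate plain system memory with additional constraints instead of
 * writing a custom GstAllocator. */
static GstMemory *
alloc_aligned (gsize size)
{
  GstAllocationParams params;

  gst_allocation_params_init (&params);
  params.align = 15;    /* alignment mask: data pointer will be 16-byte aligned */

  /* NULL selects the default allocator; a custom one could be passed instead */
  return gst_allocator_alloc (NULL, size, &params);
}
```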

For format-specific constraints, like raw video frames having a special rowstride or having their planes in separate memory regions, something else can be done, which I will mention later when talking about arbitrary buffer metadata.

So far I’m not aware of any special type of memory that can’t be handled with the new API, and I think it’s rather unlikely that there ever will be one. In the worst case it will just be a bit inconvenient to write the code.

Buffer pools

In many cases it is useful not to allocate a new block of memory whenever one is required, but instead to keep track of memory in a pool and recycle memory that is currently unused, for example because allocations are too expensive or because only a very limited number of memory blocks is available anyway.

For this, GstBufferPool is available. It provides a generic buffer pool class with standard functionality (setting the min/max number of buffers, pre-allocation of buffers, allocation parameters, allowing/forbidding dynamic allocation of buffers, selection of the GstAllocator, etc.) and can be extended by subclasses with custom API and configuration.

As the most common use case for buffer pools is raw video, there is also a GstVideoBufferPool subclass which already implements a lot of common functionality. For example it takes care of allocating memory of the right size and of caps handling, allows configuration of custom rowstrides or the use of separate memory regions for the different planes, and implements a custom configuration interface for specifying additional padding around each plane (GstVideoAlignment).
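
As an illustration only (format, size and buffer counts here are arbitrary assumptions), configuring such a pool for 1080p I420 video could look roughly like this:

```c
#include <gst/gst.h>
#include <gst/video/video.h>
#include <gst/video/gstvideopool.h>

/* Sketch: set up a GstVideoBufferPool that allocates buffers of the right
 * size for 1080p I420 and attaches GstVideoMeta to them. */
static GstBufferPool *
create_video_pool (void)
{
  GstVideoInfo info;
  GstCaps *caps;
  GstBufferPool *pool;
  GstStructure *config;

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 1920, 1080);
  caps = gst_video_info_to_caps (&info);

  pool = gst_video_buffer_pool_new ();
  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_set_params (config, caps, info.size, 2, 0);
  gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
  gst_buffer_pool_set_config (pool, config);
  gst_caps_unref (caps);

  gst_buffer_pool_set_active (pool, TRUE);
  return pool;
}
```

Buffers are then obtained with gst_buffer_pool_acquire_buffer() and automatically go back to the pool once they are unreffed.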

Arbitrary buffer metadata

In GStreamer 1.0 you can attach GstMeta to a buffer to represent arbitrary metadata, which can be used for all kinds of different things. GstMeta provides generic descriptions of the aspects of a buffer it applies to (tags) and functions to transform the meta (e.g. if a filter changes the aspect the meta applies to). The GstMeta supported by elements can also be negotiated between them, to make sure all elements in a chain support a specific kind of metadata if anything needs to care about it.

Examples of GstMeta usage would be the addition of simple metadata to buffers: metadata defining a rectangle with a region of interest on a video frame (e.g. for face detection), defining the gamma transfer function that applies to a video frame, defining custom rowstrides, or defining special downmix matrices for converting 5.1 audio to stereo. While these are all examples that mostly apply to raw audio and video, it is also possible to implement metas that describe compressed formats, e.g. a meta that provides an MPEG sequence header as parsed C structures.

Apart from simple metadata it is also possible to use GstMeta for delayed processing. For example, if all elements in a chain support a cropping metadata, it would be possible to not crop the video frame in the decoder but instead let the video sink handle the cropping, based on a cropping rectangle put on the buffer as metadata. Or subtitles could be put on a buffer as metadata instead of overlaying them on top of the frame, and only in the sink, after all other processing, would the subtitles be overlaid on top of the video frame.
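
A hedged sketch of the cropping case (the numbers are made up): the producer attaches a GstVideoCropMeta and leaves the actual cropping to whoever renders the frame:

```c
#include <gst/video/video.h>

/* Sketch: mark the visible region of a larger coded frame instead of
 * cropping/copying it in the decoder. */
static void
attach_crop_rectangle (GstBuffer * buffer)
{
  GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

  crop->x = 0;
  crop->y = 0;
  crop->width = 1920;
  crop->height = 1080;
}
```

Downstream, a sink that negotiated support for this meta would read it back with gst_buffer_get_video_crop_meta() and only display that rectangle.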

And then it's also possible to use GstMeta to define "dynamic interfaces" on buffers. The metadata could for example contain a function pointer to a function that uploads the buffer to a GL texture, which could simply be implemented via glTexImage2D() for normal memory, or via special VAAPI API for memory that is backed by VAAPI.
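
This is what GstVideoGLTextureUploadMeta does. A rough sketch of the consumer side, assuming it is called from the GL thread with a current GL context:

```c
#include <gst/video/video.h>

/* Sketch: ask the producer of a buffer to upload its contents into a texture
 * we own, no matter what kind of memory backs the buffer. */
static gboolean
upload_buffer_to_texture (GstBuffer * buffer, guint texture_id)
{
  GstVideoGLTextureUploadMeta *meta =
      gst_buffer_get_video_gl_texture_upload_meta (buffer);
  guint texture_ids[4] = { texture_id, 0, 0, 0 };

  if (meta == NULL)
    return FALSE;

  return gst_video_gl_texture_upload_meta_upload (meta, texture_ids);
}
```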

All the examples given in the previous paragraphs are already implemented by GStreamer and can just be re-used by other elements, and we plan to add more generic GstMeta in the future. Look at GstVideoMeta, GstVideoCropMeta, GstVideoGLTextureUploadMeta, GstVideoOverlayCompositionMeta and GstAudioDownmixMeta for example.

Do not use GstMeta for memory-specific information, e.g. as a lazy way to get around implementing a custom GstMemory and GstAllocator. It won't work optimally!

Negotiation

Compared to 0.10, negotiation between elements was extended by another step. There is now a) caps negotiation and b) allocation negotiation. Both happen before data flow and after receiving a RECONFIGURE event.

caps negotiation

During caps negotiation the pads of the elements decide on one specific format that is supported by both linked pads, and also make it possible for further downstream elements to decide on a specific format. Caps in GStreamer represent such a specific format and its properties; they are expressed in a MIME-type-like way with properties, e.g. "video/x-h264,width=1920,height=1080,profile=main".

GstCaps can contain a list of such formats, represented as GstStructures, and each of these structures contains a name and a list of fields. The values of the fields can either be concrete single values or describe multiple values (e.g. by using integer ranges). On top of caps many generic operations are defined to check compatibility of caps, to create intersections, to merge caps, or to convert them into a single, fixed format (which is required during negotiation). If multiple structures are contained in a caps, they are ordered by preference.
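
For illustration (the caps strings are arbitrary), intersection and subset checks look like this:

```c
#include <gst/gst.h>

/* Sketch: basic caps operations used during negotiation. */
static void
caps_operations (void)
{
  GstCaps *a, *b, *inter;
  gboolean subset;
  gchar *str;

  a = gst_caps_from_string ("video/x-raw, width=[ 16, 4096 ], height=[ 16, 4096 ]");
  b = gst_caps_from_string ("video/x-raw, width=1920, height=1080");

  inter = gst_caps_intersect (a, b);    /* -> video/x-raw, width=1920, height=1080 */
  subset = gst_caps_is_subset (b, a);   /* TRUE: b is a more specific version of a */

  str = gst_caps_to_string (inter);
  g_print ("intersection: %s, subset: %d\n", str, subset);
  g_free (str);

  gst_caps_unref (inter);
  gst_caps_unref (a);
  gst_caps_unref (b);
}
```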

All this is nothing new, but in GStreamer 1.2 caps were extended with GstCapsFeatures, which allow adding additional constraints to caps. For example a specific memory type can be required, or the existence of specific metadata. This is used during negotiation to make sure only compatible elements are linked (e.g. ones that can only work with EGLImage memory would not be seen as compatible with ones that can only handle VAAPI memory, although everything else in the caps is the same), and it is also used in autoplugging elements like playbin to select the most optimal decoder and sink combinations. An example of such caps with caps features would be "video/x-raw(memory:EGLImage),width=1920,height=1080" to specify that EGLImage memory is required.
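
A small sketch of both ways to express such caps (the field values are just examples):

```c
#include <gst/gst.h>

/* Sketch: caps that require a specific memory type via GstCapsFeatures. */
static GstCaps *
make_eglimage_caps (void)
{
  GstCaps *caps;

  /* parsed from a string ... */
  caps = gst_caps_from_string ("video/x-raw(memory:EGLImage), width=1920, height=1080");
  gst_caps_unref (caps);

  /* ... or built programmatically */
  caps = gst_caps_new_simple ("video/x-raw",
      "width", G_TYPE_INT, 1920, "height", G_TYPE_INT, 1080, NULL);
  gst_caps_set_features (caps, 0, gst_caps_features_new ("memory:EGLImage", NULL));

  return caps;
}
```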

So, the actual negotiation of the caps happens from upstream to downstream. The most upstream element uses a CAPS query, which replaces the getcaps function of pads from 0.10 and has an additional filter argument in 1.0. The filter argument is used to tell downstream which caps are actually supported by upstream and in which preference, so it doesn't have to create unnecessary caps. This query is then propagated further downstream to the next elements, translated if necessary (e.g. from raw video caps to h264 caps by an encoder, while still proxying the values of the width/height fields), and is finally answered by the most downstream element that doesn't propagate the query any further, filled with the caps supported by that element's pad. On the way back upstream these result caps are further narrowed down, translated between different formats and potentially reordered by preference.

Once the results are received by the most upstream element, it tries to choose one specific format from those supported by downstream and itself, keeping its own and downstream's preferences under consideration. Then, as a last step, it sends a CAPS event downstream to notify about the actually chosen caps. This CAPS event is further propagated downstream by elements (translated if necessary) that don't need additional information to decide on their output format; otherwise it is stored, and later, when all necessary information is available (e.g. after the header buffers were received and parsed), a new CAPS event is sent downstream with the new caps.
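
A compressed sketch of the querying side (the filter caps are only an example):

```c
#include <gst/gst.h>

/* Sketch: ask downstream what it supports (restricted by what we can produce
 * ourselves), pick one fixed format and announce it with a CAPS event. */
static gboolean
negotiate_caps (GstPad * srcpad)
{
  GstCaps *filter, *possible, *fixed;
  gboolean ret = FALSE;

  filter = gst_caps_from_string ("video/x-raw, format=(string){ I420, NV12 }");
  possible = gst_pad_peer_query_caps (srcpad, filter);
  gst_caps_unref (filter);

  if (possible != NULL && !gst_caps_is_empty (possible)) {
    fixed = gst_caps_fixate (possible);         /* takes ownership of 'possible' */
    ret = gst_pad_push_event (srcpad, gst_event_new_caps (fixed));
    gst_caps_unref (fixed);
  } else if (possible != NULL) {
    gst_caps_unref (possible);
  }

  return ret;
}
```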

This is all very similar to how the same worked in 0.10.

allocation negotiation

The next step, the allocation negotiation, is something new in 1.0. The ALLOCATION query is used by every element that needs to allocate output buffers (i.e. sources, converters, decoders, encoders, etc.) to get information about the allocation possibilities from downstream.

The ALLOCATION query is created with the caps that the upstream element wants, and is first answered by the most downstream element that would itself allocate new buffers again for its own downstream (i.e. converters, decoders, encoders). It is filled with any buffer pools that can be provided by downstream, allocators (and thus memory types), the minimum and maximum number of buffers, the allocation size, allocation parameters and the supported metas. On its way back upstream these results are filtered and narrowed down by the elements in between (e.g. to remove metas or memory types not supported by an element), and in particular the minimum and maximum number of buffers is updated by each element. The querying element will then decide to use one of the buffer pools and/or allocators, and which metas can be used and which not. It can also decide not to use any of the offered buffer pools or allocators and instead use its own, as long as it is compatible with the results of the query.
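
As a sketch of the downstream side (pool and size would come from the element's own configuration, and the metas listed are only examples):

```c
#include <gst/gst.h>
#include <gst/video/video.h>

/* Sketch: a downstream element answering an ALLOCATION query by offering its
 * own buffer pool and announcing the metas it can handle. */
static void
answer_allocation_query (GstQuery * query, GstBufferPool * pool, guint size)
{
  GstCaps *caps;
  gboolean need_pool;

  gst_query_parse_allocation (query, &caps, &need_pool);

  if (need_pool && pool != NULL)
    gst_query_add_allocation_pool (query, pool, size, 2, 0);

  gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
  gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL);
}
```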

The usage of downstream buffer pools or allocators is the 1.0 replacement for gst_pad_alloc_buffer() from 0.10, and it's much more efficient because the complete downstream chain does not have to be walked for every buffer allocation, but only once.

Note: the caps in the ALLOCATION query are not required to be the same as in the CAPS event. For example if the video crop metadata is used, the caps in the CAPS event will contain the cropped width and height while the caps in the ALLOCATION query will contain the uncropped (larger) width and height.
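
And a sketch of the querying side picking from the results (error handling and the actual pool configuration are left out):

```c
#include <gst/gst.h>
#include <gst/video/video.h>

/* Sketch: the upstream element inspects the answered ALLOCATION query and
 * decides which pool and metas to use. */
static GstBufferPool *
decide_allocation (GstQuery * query)
{
  GstBufferPool *pool = NULL;
  guint size = 0, min = 0, max = 0;

  if (gst_query_get_n_allocation_pools (query) > 0)
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);

  if (gst_query_find_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL))
    g_print ("downstream supports GstVideoCropMeta\n");

  /* if downstream offered no pool, the element would create and configure a
   * compatible one of its own here */
  return pool;
}
```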

Context sharing

Another problem that often needs to be solved for hardware integration (but also for other use cases) is the sharing of some kind of context between elements, maybe even before these elements are linked together. For example you might need to share a VADisplay or EGLDisplay between your decoder and your sink, or an OpenGL context (plus thread dispatching functionality), or, unrelated to hardware integration, you might want to share an HTTP session (e.g. cookies) between multiple elements.

In GStreamer 1.2 support for doing this was added to GStreamer core via GstContext.

GstContext allows sharing such a context in a generic way, using a name to identify the type of context and a GstStructure to store various information about it. It is distributed in the pipeline the following way: whenever an element needs a specific context it first checks whether one of the correct type was already set on it (for which the GstElement::set_context vfunc would have been called). Otherwise it queries downstream and then upstream with the CONTEXT query to retrieve any locally known contexts of the correct type (e.g. a decoder could get a context from a sink). If this doesn't give any result either, the element posts a NEED_CONTEXT message on the bus. This is sent upwards in the bin hierarchy, and if one of the containing bins knows a context of the correct type it will synchronously set it on the element. Otherwise the application receives the NEED_CONTEXT message and has the possibility to set a context on the element from a sync bus handler. If the element still has no context set, it can create one itself and advertise it in the bin hierarchy and to the application with the HAVE_CONTEXT message. When receiving this message, bins will cache the context and use it the next time a NEED_CONTEXT message is seen. The same goes for contexts that were set on a bin; those will also be cached by the bin for future use.
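
A rough sketch of the last step, creating and advertising a context (the context type string and the "display" field are placeholders, not a real GStreamer context type):

```c
#include <gst/gst.h>

/* Sketch: an element that could not find an existing context creates its own
 * and makes it known to the bin hierarchy and the application. */
static void
publish_display_context (GstElement * element, gpointer display_handle)
{
  GstContext *context;
  GstStructure *s;

  context = gst_context_new ("gst.example.Display", TRUE /* persistent */);
  s = gst_context_writable_structure (context);
  gst_structure_set (s, "display", G_TYPE_POINTER, display_handle, NULL);

  gst_element_set_context (element, context);

  /* bins cache this and hand it to other elements posting NEED_CONTEXT later */
  gst_element_post_message (element,
      gst_message_new_have_context (GST_OBJECT (element), context));
}
```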

All in all this does not require any interaction with the application, unless the application wants to configure which contexts are set on which elements.

To make the distribution of contexts more reliable, decodebin has a new "autoplug-query" signal, which can be used to forward CONTEXT (and ALLOCATION) queries of not-yet-exposed elements to downstream elements (e.g. sinks) during autoplugging. This is also used by playbin to make sure elements can reach the sinks during autoplugging.

Open issues

Does this mean that GStreamer is complete now? No 🙂 There are still some open issues around this, but these should be fixable in 1.x without any problems; all the building blocks are there. Overall, hardware integration already works much, much better than in 0.10.

Reconfiguration

One remaining issue is the handling of device reconfiguration. Often it is required to release all memory allocated from a device before being able to set a new configuration on the device and allocate new memory. The problem with this is that GStreamer elements currently have no control over downstream or upstream elements that use their memory; there is no API to request them to release memory. And even if there were, it would be a bit tricky to integrate into 3rd party libraries like libav, as they keep references to memory internally for things like reference frames.

Related to this is that currently buffer pools only keep track of buffers, but there’s nothing that makes it easy to keep track of memories. Implementing this over and over again in GstAllocator subclasses is error-prone and inconvenient.

All discussions about this, and some initial ideas, are currently being handled in Bugzilla #707534.

Device probing

The other remaining issue is missing API for device probing, e.g. to get all available camera devices together with the GStreamer elements that can be used to access them. In 0.10 there was a very suboptimal solution for this, and it was removed for 1.0 without any replacement so far.

There is a patch with a new implementation of this in Bugzilla #678402. The new idea is to implement a custom GstPluginFeature that allows creating device probing instances for elements from the registry (similar to GstElementFactory). This is planned to be integrated really soon now and should be in 1.4.

Summary

In practice this currently means that it's possible to use gst-vaapi transparently in playbin, without the application having to know anything VAAPI related. playbin will automatically select the correct decoders and sinks and have them work together in an optimal way. This even works inside a WebKit HTML5 website together with fancy CSS3 effects.

gst-omx and the v4l2 plugin can now provide zerocopy operation, a slow ARM-based device like the Raspberry Pi can decode HD video in realtime to EGLImages and display them, gst-plugins-gl has a solution to all OpenGL-related threading problems, and much more.

I think with what we have in GStreamer 1.2 we should be able to integrate all kinds of hardware in an optimal way and have it used transparently in pipelines. Some APIs might be missing, but there should be nothing that can’t be added rather easily now.
