2008-05-22
In preparation for my Luma media player, I wanted to create a simple audio player with visualization. From what I have read, this requires a "tee" when using gstreamer. A tee in gstreamer is much like a tee in piped shell commands: data gets copied at the tee and travels in multiple directions. In gstreamer, one needs to create a queue for each new branch that the data will flow down in the pipeline.
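To convince myself of the shape of the thing, here's a rough pure-Python sketch of the idea (no gstreamer involved, and modern Python 3 rather than the py2 code further down; the tee/sink names are made up for illustration). Each branch gets its own queue and its own consumer thread, which is roughly why gstreamer wants a queue per branch: it decouples the consumers so one slow sink can't stall the others.

```python
from queue import Queue
from threading import Thread

def tee(stream, branches):
    """Copy every buffer to each branch queue, then signal end-of-stream."""
    for item in stream:
        for q in branches:
            q.put(item)
    for q in branches:
        q.put(None)  # sentinel: no more data

def sink(q, out):
    """Drain one branch in its own thread, like a sink sitting behind a queue."""
    while True:
        item = q.get()
        if item is None:
            break
        out.append(item)

audio_q, video_q = Queue(), Queue()
audio_out, video_out = [], []
threads = [Thread(target=sink, args=(audio_q, audio_out)),
           Thread(target=sink, args=(video_q, video_out))]
for t in threads:
    t.start()
tee(["buf1", "buf2", "buf3"], [audio_q, video_q])
for t in threads:
    t.join()
print(audio_out == video_out == ["buf1", "buf2", "buf3"])  # True
```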
It took me a while to wrangle with the tee requirements for handling queues. I could see how, but I couldn't understand why. So anyway, this is what I came up with:
1. a tee in the pipeline gets a name
2. each branch off the tee is declared with the tee's name followed by a period (e.g. myT.)
3. each branch starts with its own queue, added to the gstreamer pipeline
4. the tee's name (with the period) gets placed again at the end of a branch to start the next one (this doesn't seem to be required after the last branch)
My gstreamer pipeline looks like this:
#!/bin/sh
gst-launch \
    filesrc location=/path/to/audio/file \
    ! decodebin ! audioconvert \
    ! tee name=myT myT. \
    ! queue ! autoaudiosink myT. \
    ! queue ! goom ! ffmpegcolorspace ! autovideosink
Sweet! Now on to my pythonic version using pygst:
#!/usr/bin/env python
import sys
import gst
import time

class myPlayer():
    def __init__(self):
        self.pipeline = gst.Pipeline()
        self.src = gst.element_factory_make("filesrc", "src")
        self.decoder = gst.element_factory_make("decodebin", "decoder")
        self.decoder.connect("new-decoded-pad", self.onNewDecodedPad)
        self.goom = gst.element_factory_make("goom")
        self.colorspace = gst.element_factory_make("ffmpegcolorspace", "color")
        self.conv = gst.element_factory_make("audioconvert", "conv")
        self.vidsink = gst.element_factory_make("autovideosink", "videosink")
        self.asink = gst.element_factory_make("autoaudiosink", "aoutput")
        self.tee = gst.element_factory_make('tee', "tee")
        self.queuea = gst.element_factory_make("queue", "queuea")
        self.queuev = gst.element_factory_make("queue", "queuev")
        self.pipeline.add(self.src, self.decoder, self.conv, self.tee, self.queuea)
        self.pipeline.add(self.asink, self.queuev, self.goom, self.colorspace, self.vidsink)
        gst.element_link_many(self.src, self.decoder)
        gst.element_link_many(self.conv, self.tee)
        self.tee.link(self.queuea)
        self.queuea.link(self.asink)
        self.tee.link(self.queuev)
        gst.element_link_many(self.queuev, self.goom, self.colorspace, self.vidsink)

    def onNewDecodedPad(self, decodebin, pad, islast):
        # link the pad to the converter
        decodebin.link(self.conv)

    def playfile(self, file):
        self.src.set_property('location', file)
        self.pipeline.set_state(gst.STATE_PLAYING)
        pipelinestate = self.pipeline.get_state()
        while pipelinestate[1] == gst.STATE_PLAYING:
            time.sleep(1)
            pipelinestate = self.pipeline.get_state()
        sys.exit()

if __name__ == '__main__':
    if len(sys.argv) > 1:
        file = sys.argv[1]
        player = myPlayer()
        player.playfile(file)
    else:
        print "you must select a tune"
The big difference here, at least to me, is that the decodebin isn't really a bin; it represents a series of possible bins. If one were to select a vorbis file to play, the decodebin determines the correct type of bin needed to handle the file and creates an instance of that bin. The same is true for wav, flac, aac, mp3, etc.; each has a specific decoder that I don't want to have to figure out, so I let the decodebin do it for me. This line: self.decoder.connect("new-decoded-pad", self.onNewDecodedPad) will call a function whenever a new pad is created by the decodebin, and it is in the onNewDecodedPad function that the decodebin gets linked to the rest of the pipeline. Does that make sense?
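To make the callback dance concrete, here's a toy sketch in plain Python (everything here is made up for illustration; FakeDecodebin is not real pygst API). The point it demonstrates: the decodebin can't be linked up front because the right decoder only becomes known once the file is inspected, so the final linking has to happen inside the callback.

```python
import os

class FakeDecodebin:
    """Toy stand-in for decodebin: picks a decoder at runtime, then
    fires the registered callback so the caller can finish linking."""
    DECODERS = {".ogg": "vorbisdec", ".mp3": "mad", ".flac": "flacdec"}

    def __init__(self):
        self.handlers = []

    def connect(self, signal, handler):
        # mirrors self.decoder.connect("new-decoded-pad", ...) in the real code
        if signal == "new-decoded-pad":
            self.handlers.append(handler)

    def open(self, filename):
        ext = os.path.splitext(filename)[1]
        pad = self.DECODERS.get(ext, "unknown")
        # only now does a usable pad exist, so only now can linking happen
        for handler in self.handlers:
            handler(self, pad)

linked = []
decoder = FakeDecodebin()
decoder.connect("new-decoded-pad", lambda db, pad: linked.append(pad))
decoder.open("song.ogg")
print(linked)  # ['vorbisdec']
```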
Comments
Thanks for the catch! The code has been updated.
Thanks for the great example - I spent an hour trying to fit a pipeline for goom... You saved me :-)
Now this works on my...drumroll...Nokia N900! Maemo forever :-)
That is awesome Tom! I've always wanted to know how well gstreamer runs on ARM based systems.
"self.popeline.add" should be "self.pipeline.add"