You're on the right track. To add support for a new datatype,
you first write a fileformat plugin which understands how
to parse the file type. Then you write a rendering plugin
which knows how to take the packets from the fileformat plugin
and "render" them. In the case of audio, "rendering" would
mean decoding the packets and writing the decoded audio to
the audio services interfaces.
These are usually the questions I ask when adding
new fileformat and renderer plugins:
Did the fileformat plugin get invoked when you tried to
play the sample files? If so, did it produce packets, and were
those packets received by the renderer plugin? Both of these questions
can be answered by just running in a debugger.
Eric