Scripts:
(1) Batch extract all the feature sets using openSMILE:
perl stddirectory_smileextract.pl /home/sudan/Desktop/research/MusicGenreClassification/genres_16k_mono_wav/ emobase.conf out.arff
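If the batch script is unavailable, a plain shell loop over openSMILE's SMILExtract binary is a rough equivalent (a sketch: it assumes SMILExtract is on the PATH, and that emobase.conf's ARFF writer appends rather than overwrites, which is worth verifying):

for f in /home/sudan/Desktop/research/MusicGenreClassification/genres_16k_mono_wav/*.wav; do
    SMILExtract -C emobase.conf -I "$f" -O out.arff
done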
(2) Use bextract to do the prediction (with all the feature sets in bextract):
bextract -sv -timbral -spfe -chroma -sfm -scf -lsp -lpcc genres10.mf -tc testdata_2.mf -w temp.arff
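For reference, a Marsyas .mf collection file is plain text with one audio file per line, optionally followed by a tab and a class label; the entries below are made-up examples of the expected shape:

genres_16k_mono_wav/blues/blues.00000.wav	blues
genres_16k_mono_wav/classical/classical.00000.wav	classical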
(3) Use bextract to do the prediction (with only the timbral feature sets in bextract):
bextract -sv -timbral genres10.mf -tc testdata_2.mf -w temp.arff
Even though the recognition results seem better for (3) than for (2), they still do not make much sense.
(4) Add a blank line between each entry in the test .mf file:
awk '{print $1 "\n"}' testdata_2.mf > testdata_2_double.mf
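Note that the awk one-liner keeps only the first whitespace-separated field of each line (a tab-separated label would be dropped) before adding the extra newline. If the intent is only to double-space the file while leaving each line intact, sed's G command does that:

sed G testdata_2.mf > testdata_2_double.mf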
(5) Use Weka's SMO to do the prediction:
java -Xms1024m weka.classifiers.functions.SMO -t /home/sudan/Desktop/research/MusicGenreClassification/features_all_temp.arff -c last -T /home/sudan/Desktop/research/MusicGenreClassification/testdata_3.arff -p 0 -o > pred.out
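Before trusting the predictions, it may be worth sanity-checking the classifier with Weka's built-in 10-fold cross-validation on the training ARFF (the -x option sets the number of folds; everything else matches the command above):

java -Xms1024m weka.classifiers.functions.SMO -t /home/sudan/Desktop/research/MusicGenreClassification/features_all_temp.arff -c last -x 10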