1. No cross validation
ref:
http://stackoverflow.com/questions/16927964/how-to-calculate-precision-recall-and-f-score-with-libsvm-in-python
from sklearn import svm
from sklearn import metrics
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in 0.20
from sklearn.datasets import load_iris

# prepare dataset
iris = load_iris()
X = iris.data[:, :2]  # use only the first two features
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# svm classification
clf = svm.SVC(kernel='rbf', gamma=0.7, C=1.0).fit(X_train, y_train)
y_predicted = clf.predict(X_test)

# performance
print("Classification report for %s" % clf)
print()
print(metrics.classification_report(y_test, y_predicted))
print()
print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_predicted))
This example outputs precision, recall, and F1 per class. If you need the overall accuracy, use the code below:
metrics.accuracy_score(y_test, y_predicted)

2. Cross Validation
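The individual scores behind the report can also be computed directly. A minimal sketch using `precision_recall_fscore_support` on toy labels (the `average='macro'` choice here is an assumption; pick the averaging that fits your task):

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 2]

# macro-average: compute each metric per class, then take the unweighted mean
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
print("precision=%.2f recall=%.2f f1=%.2f" % (p, r, f))
```

With `average=None` the same call returns per-class arrays instead of a single number.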
The head is the same as in the example above; after we have X and y:
from sklearn.model_selection import cross_val_score
clf2 = svm.SVC(kernel='linear', gamma=0.7, C=1.0)  # build an SVM classifier (gamma is ignored by the linear kernel)
scores = cross_val_score(clf2, X, y, cv=5)  # accuracy per fold
scores2 = cross_val_score(clf2, X, y, cv=5, scoring='f1_macro')  # macro F1; plain 'f1' only works for binary targets, and iris has 3 classes
print('clf2 accuracy:', scores.mean())
print('clf2 f1:', scores2.mean())
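If you want the full per-class report from cross-validation rather than a single averaged score, one option (a sketch, not from the original post) is `cross_val_predict`, which collects each sample's prediction from the fold where it was held out:

```python
from sklearn import svm, metrics
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict

iris = load_iris()
X, y = iris.data[:, :2], iris.target

clf = svm.SVC(kernel='linear', C=1.0)
# each sample is predicted by the model trained on the other folds
y_cv = cross_val_predict(clf, X, y, cv=5)
print(metrics.classification_report(y, y_cv))
```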