We define an array `y`, and `y1` is the one-hot encoding of `y`.
Details about `y` and `y1` can be inspected in Spyder as follows (note that if we do not call `.toarray()`, we get a sparse matrix rather than an ordinary dense array):
When we compute the classification accuracy, we need to convert the one-hot vectors back to a column vector of labels and compare the predicted label vector with the actual one.
y2 = np.argmax(y1, axis=1)  # y2 is the predicted label vector
a = (y2 == y.T)             # y is the actual label vector; compare predictions with actual labels
accuracy = float(a.sum()) / len(y)  # compute accuracy
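As a sketch (using the same toy labels as above), an equivalent and shape-safe way to compute the accuracy is to flatten `y` first with `.ravel()`, which sidesteps the broadcasting surprise discussed below:

```python
import numpy as np

y = np.array([[0], [1], [2]])   # actual labels, shape (3, 1)
y2 = np.array([0, 1, 2])        # predicted labels, shape (3,)

# Flattening y to shape (3,) makes the comparison purely element-wise:
accuracy = float((y2 == y.ravel()).sum()) / len(y)
print(accuracy)  # 1.0
```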
Details about y2 and a can be seen in Spyder. The shape of y2 is (3,), which is a special shape: a 1-D array. The shape of y is (3, 1).
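A minimal sketch of why these two shapes behave differently under comparison (the shapes are the ones from this example; NumPy broadcasting produces the result shapes shown in the comments):

```python
import numpy as np

y = np.array([[0], [1], [2]])   # shape (3, 1), a column vector
y2 = np.array([0, 1, 2])        # shape (3,), a 1-D array

# Comparing a (3,) array against a (3, 1) array broadcasts to a (3, 3) matrix:
print((y2 == y).shape)    # (3, 3)

# Comparing against the transposed (1, 3) row broadcasts element-wise:
print((y2 == y.T).shape)  # (1, 3)
```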
One important point: we need to use a = (y2 == y.T) to compare the predicted label vector with the actual one, i.e. we must transpose y. If we use a1 = (y2 == y) instead, then:
This is obviously wrong. I think the shape (3,) behaves somewhat like a row vector, but, strangely, transposing it has no effect:
y5 = y2.T
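In fact, `.T` is a no-op on a 1-D array, so `y5` keeps the shape (3,). A short sketch showing this, along with two standard ways to obtain a genuine column vector (`reshape` and `np.newaxis`):

```python
import numpy as np

y2 = np.array([0, 1, 2])         # shape (3,), a 1-D array

print(y2.T.shape)                # (3,) — transposing a 1-D array does nothing
print(y2.reshape(-1, 1).shape)   # (3, 1) — an explicit column vector
print(y2[:, np.newaxis].shape)   # (3, 1) — equivalent alternative
```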
In summary, the correct code is as follows:
import numpy as np
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder() # one-hot encoder to convert the label to one-hot vector
y = np.array([[0],[1],[2]])
y1 = enc.fit_transform(y).toarray()
y2 = np.argmax(y1, axis=1)
a = (y2 == y.T)
accuracy = float(a.sum()) / len(y)
The corresponding results are as follows: