One-hot encoding
One-hot encoding creates a new binary column for each possible value in the original categorical data; each column indicates whether that value is present in a given row.
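As a quick sketch of the idea (the toy column here is made up), a single categorical column becomes one binary column per distinct value:

```python
import pandas as pd

# Hypothetical toy data: one categorical column with two distinct values
df = pd.DataFrame({'Color': ['Red', 'Green', 'Red']})

# get_dummies replaces the column with one binary column per value
encoded = pd.get_dummies(df)
print(encoded.columns.tolist())  # ['Color_Green', 'Color_Red']
```

Each row of `encoded` has exactly one of the two columns set, marking which value was present.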
```python
# Explore the data types to find the categorical ("object") columns
print(train_data.dtypes)

# One-hot encode the categorical columns
one_hot_encoded_train_predictors = pd.get_dummies(train_predictors)
```
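To make the exploration step concrete (the column names below are invented for illustration), object-dtype columns are the ones `get_dummies` will encode:

```python
import pandas as pd

# Hypothetical frame mixing a numeric and a categorical column
train_data = pd.DataFrame({'LotArea': [8450, 9600], 'Street': ['Pave', 'Grvl']})
print(train_data.dtypes)  # LotArea is int64, Street is object

# The object-dtype columns are the categorical candidates
categorical_cols = train_data.select_dtypes(include=['object']).columns.tolist()
print(categorical_cols)  # ['Street']
```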
Scikit-learn is sensitive to the ordering of columns, so if a categorical column takes different values in the training data than in the test data, the encoded columns will not line up and the results will be nonsense.
To ensure the test data is encoded in the same manner as the training data, use the align command:
```python
one_hot_encoded_train_predictors = pd.get_dummies(train_predictors)
one_hot_encoded_test_predictors = pd.get_dummies(test_predictors)

# join='left': keep exactly the training columns (the equivalent of SQL's left join)
final_train, final_test = one_hot_encoded_train_predictors.align(
    one_hot_encoded_test_predictors, join='left', axis=1)
```
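A small self-contained demonstration of why the alignment matters (the `Color` values are made up): when the test data contains a value the training data never saw, `align` with `join='left'` forces the test frame into the training layout, and any column missing from the test side comes back as NaN, so it is common to fill those with 0:

```python
import pandas as pd

# Hypothetical mismatch: the test data has a value ('Blue') never seen in training
train = pd.get_dummies(pd.DataFrame({'Color': ['Red', 'Green']}))
test = pd.get_dummies(pd.DataFrame({'Color': ['Red', 'Blue']}))

# join='left' keeps only the training columns, in the training order
final_train, final_test = train.align(test, join='left', axis=1)

# Columns absent from the test data are NaN after aligning; treat them as 0
final_test = final_test.fillna(0)

print(final_test.columns.tolist())  # same columns as final_train
```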
Further learning
Pipelines: scikit-learn offers the OneHotEncoder class for one-hot encoding, and this can be added to a pipeline.
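A minimal sketch of that pipeline idea (the toy data and model choice are illustrative, not from the original): OneHotEncoder with `handle_unknown='ignore'` encodes unseen test categories as all zeros, which sidesteps the train/test column-mismatch problem described above.

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

# Encoding and modeling bundled in one object, so the same encoding
# learned from the training data is reused at prediction time
pipe = Pipeline([
    ('encode', OneHotEncoder(handle_unknown='ignore')),
    ('model', LogisticRegression()),
])

X = [['Red'], ['Green'], ['Red'], ['Green']]
y = [1, 0, 1, 0]
pipe.fit(X, y)

# An unseen category ('Blue') is encoded as all zeros rather than raising an error
print(pipe.predict([['Blue']]))
```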
Applications to text for deep learning: Keras and TensorFlow have functionality for one-hot encoding, which is useful for working with text.
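In Keras this is `tf.keras.utils.to_categorical`; the underlying operation can be sketched in plain NumPy (the token indices below are made up, as if produced by some tokenizer):

```python
import numpy as np

def to_one_hot(indices, num_classes):
    """One-hot encode integer token indices (the idea behind Keras's to_categorical)."""
    out = np.zeros((len(indices), num_classes))
    out[np.arange(len(indices)), indices] = 1.0
    return out

# e.g. a short text mapped to token indices by a tokenizer
tokens = [2, 0, 1]
print(to_one_hot(tokens, num_classes=3))
```

Each row of the result has a single 1 at the position of that token's index.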
Categoricals with many values: Scikit-learn's FeatureHasher uses the hashing trick to store high-dimensional data compactly, at the cost of some added complexity in your modeling code.
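A hand-rolled sketch of the hashing trick (FeatureHasher itself additionally uses a signed hash to reduce collision bias; the helper names here are invented): each value is hashed to a fixed-size index, so the encoded vector length stays constant no matter how many distinct values exist.

```python
import hashlib

def hash_feature(value, n_features=8):
    """Map a categorical value to a fixed-size index via the hashing trick."""
    digest = hashlib.md5(value.encode('utf-8')).hexdigest()
    return int(digest, 16) % n_features

def hash_encode(values, n_features=8):
    """Encode a row's categorical values into a fixed-length count vector."""
    vec = [0] * n_features
    for v in values:
        vec[hash_feature(v, n_features)] += 1
    return vec

row = hash_encode(['zipcode=90210', 'city=LA'])
print(len(row))  # always 8, regardless of the cardinality of the data
```

The trade-off is that distinct values can collide into the same index, which is the extra modeling complexity the note above refers to.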