Bangla Article Classification With TF-Hub

bangla_article_classifier

This Colab is a demonstration of using TensorFlow Hub for text classification in non-English/local languages. Here we choose Bangla as the local language and use pretrained word embeddings to solve a multiclass classification task where we classify Bangla news articles into 5 categories. The pretrained embeddings for Bangla come from fastText, a library by Facebook that has released pretrained word vectors for 157 languages.

We’ll first use TF-Hub’s pretrained embedding exporter to convert the word embeddings into a text embedding module, and then use the module to train a classifier with tf.keras, TensorFlow’s high-level, user-friendly API for building deep learning models. Even though we use fastText embeddings here, it’s possible to export any other embeddings pretrained on other tasks and quickly get results with TensorFlow Hub.

We will use BARD (Bangla Article Dataset), which has around 376,226 articles collected from different Bangla news portals and labelled with 5 categories: economy, state, international, sports, and entertainment.

Export pretrained word vectors to TF-Hub module:
TF-Hub provides a useful script (export_v2.py) for converting word embeddings to TF-Hub text embedding modules. To make the module for Bangla or any other language, we simply have to download the word embedding .txt or .vec file into the same directory as export_v2.py and run the script.

The exporter reads the embedding vectors and exports them to a TensorFlow SavedModel. A SavedModel contains a complete TensorFlow program, including weights and graph. TF-Hub can load the SavedModel as a module, which we will use to build the model for text classification. Since we are using tf.keras to build the model, we will use hub.KerasLayer, which wraps a TF-Hub module for use as a Keras layer.

Then, we run the exporter script on our embedding file. Since fastText embeddings have a header line and are pretty large (around 3.3 GB for Bangla after converting to a module), we ignore the first line and export only the first 100,000 tokens to the text embedding module.
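A minimal sketch of this step is shown below. The fastText download URL and the exporter's flag names are assumptions based on the fastText site and the tensorflow/hub examples repository; verify them (for example with the script's --help) before running.

import os
import urllib.request

# Fetch the Bangla fastText vectors and the TF-Hub exporter script.
urllib.request.urlretrieve(
    "https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz",
    "cc.bn.300.vec.gz")
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py",
    "export_v2.py")
os.system("gunzip -qf cc.bn.300.vec.gz")

# Skip the fastText header line and export the first 100,000 tokens to the
# "text_module" SavedModel directory used later in this Colab.
os.system("python export_v2.py --embedding_file=cc.bn.300.vec "
          "--export_path=text_module --num_lines_to_ignore=1 "
          "--num_lines_to_use=100000")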

The text embedding module takes a batch of sentences in a 1D tensor of strings as input and outputs embedding vectors of shape (batch_size, embedding_dim) corresponding to the sentences. It preprocesses the input by splitting on spaces. Word embeddings are combined into sentence embeddings with the sqrtn combiner. For demonstration, we pass a list of Bangla words as input and get the corresponding embedding vectors.

Convert to TensorFlow Dataset:
Since the dataset is quite large, instead of loading it entirely into memory we will use tf.data to read the articles from disk in batches at training time. The dataset is also imbalanced and the files are listed class by class, so we shuffle the file paths and labels before splitting them into training and validation sets.

To create the Datasets, we pass the shuffled file paths and labels to the tf.data.Dataset.from_tensor_slices method and map a small load_file function over them, which reads each article from disk with tf.io.read_file. Each training example is a tuple containing an article of tf.string data type and its integer class label. We split the data with a train-validation split of 80-20 by slicing the shuffled file_paths and labels arrays before building the two Datasets.

Model Training and Evaluation:
Since we have already added a wrapper around our module so it can be used like any other Keras layer, we can create a small Sequential model, which is a linear stack of layers. We can add our text embedding module just like any other layer. We compile the model by specifying the loss and optimizer and train it for up to 5 epochs, with early stopping on the validation loss. The tf.keras API can handle TensorFlow Datasets as input, so we can pass a Dataset instance to the fit method for model training; tf.data handles reading the samples, batching them, and feeding them to the model.

We can get the predictions for the validation data and check the confusion matrix to see the model’s performance for each of the 5 classes. Because the tf.keras.Model.predict method returns a 2-D array of per-class scores, they can be converted to class labels using np.argmax.

Compare Performance:
Now we can take the true labels for the validation data from labels and compare them with our predictions to get a classification_report.

The original authors described many preprocessing steps performed on the dataset, such as dropping punctuation and digits and removing the 25 most frequent stop words. As the classification_report shows, we still obtain 0.96 precision and accuracy after training for only 5 epochs without any of that preprocessing.

In this example, when we created the Keras layer from our embedding module, we set the parameter trainable=False, which means the embedding weights will not be updated during training.
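If we instead wanted to fine-tune the embeddings together with the classifier, the same layer could be created with trainable=True, at the cost of more memory and training time. A minimal sketch, assuming the exported module exposes its embedding table as a trainable variable:

import tensorflow_hub as hub

# Hypothetical variant: load the exported module with trainable weights so the
# embeddings are updated during training.
trainable_embedding_layer = hub.KerasLayer("text_module", trainable=True)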

import os

import tensorflow as tf
import tensorflow_hub as hub

import gdown
import numpy as np
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import seaborn as sns

# gdown.download(
#     url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy',
#     output='bard.zip',
#     quiet=False
# )
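# The commented gdown call above fetches the BARD archive as bard.zip. The
# per-class directories (economy, sports, ...) used below must exist in the
# working directory, so extract the archive after downloading. Sketch only:
# the archive layout is an assumption; adjust if the zip nests the folders
# differently.
# import zipfile
# with zipfile.ZipFile('bard.zip') as zf:
#     zf.extractall()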

module_path = "text_module"
embedding_layer = hub.KerasLayer(module_path, trainable=False)

embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক'])

dir_names = ['economy', 'sports', 'entertainment', 'state', 'international']

file_paths = []
labels = []
for i, dir_name in enumerate(dir_names):
    file_names = ["/".join([dir_name, name]) for name in os.listdir(dir_name)]
    file_paths += file_names
    labels += [i] * len(file_names)

np.random.seed(42)
permutation = np.random.permutation(len(file_paths))

file_paths = np.array(file_paths)[permutation]
labels = np.array(labels)[permutation]

train_frac = 0.8
train_size = int(len(file_paths) * train_frac)

# plot training vs validation distribution
plt.subplot(1, 2, 1)
plt.hist(labels[0:train_size])
plt.title("Train labels")
plt.subplot(1, 2, 2)
plt.hist(labels[train_size:])
plt.title("Validation labels")
plt.tight_layout()
plt.show()


def load_file(path, label):
    return tf.io.read_file(path), label


def make_datasets(train_size):
    batch_size = 256

    train_files = file_paths[:train_size]
    train_labels = labels[:train_size]
    train_ds = tf.data.Dataset.from_tensor_slices((train_files, train_labels))
    train_ds = train_ds.map(load_file).shuffle(5000)
    train_ds = train_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)

    test_files = file_paths[train_size:]
    test_labels = labels[train_size:]
    test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels))
    test_ds = test_ds.map(load_file)
    test_ds = test_ds.batch(batch_size).prefetch(tf.data.AUTOTUNE)

    return train_ds, test_ds


train_data, validation_data = make_datasets(train_size)


def create_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=[], dtype=tf.string),
        embedding_layer,
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(5),
    ])
    model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  optimizer="adam", metrics=['accuracy'])
    return model


model = create_model()
# Create earlystopping callback
early_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3)

history = model.fit(train_data,
                    validation_data=validation_data,
                    epochs=5,
                    callbacks=[early_stopping_callback])

# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()

# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()

y_pred = model.predict(validation_data)
y_pred = np.argmax(y_pred, axis=1)

samples = file_paths[train_size:train_size + 3]  # first 3 validation samples, aligned with y_pred
for i, sample in enumerate(samples):
    f = open(sample)
    text = f.read()
    print(text[0:100])
    print("True Class: ", sample.split("/")[0])
    print("Predicted Class: ", dir_names[y_pred[i]])
    f.close()

y_true = np.array(labels[train_size:])
print(classification_report(y_true, y_pred, target_names=dir_names))
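# The text above mentions checking the confusion matrix, and seaborn is
# imported but never used; this is a minimal sketch (assuming scikit-learn's
# confusion_matrix) that plots it for the 5 classes.
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_true, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d',
            xticklabels=dir_names, yticklabels=dir_names)
plt.xlabel("Predicted")
plt.ylabel("True")
plt.title("Confusion matrix on the validation data")
plt.show()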
