TensorFlow + JavaScript. The most popular, cutting-edge AI framework now supports the most widely used programming language on the planet. So let's make text and NLP (natural language processing) chatbots magically happen in the web browser through deep learning, using TensorFlow.js with WebGL-accelerated GPU!
In the first version of our text emotion detection project, we simply flagged which words from our bag-of-words vocabulary appeared in each sentence to train our neural network. Some drawbacks of this approach may already be obvious to you.
A word can have a very different meaning depending on its context. For example, the phrases "this strange seashell is pretty" and "this seashell is pretty strange" carry different meanings that get hidden once they are reduced to the same short word collection: [ "is", "pretty", "seashell", "strange", "this" ].
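To see the problem concretely, here is a tiny standalone sketch (not part of the project code; the vocabulary and helper function are made up for illustration) showing that both sentences collapse to the exact same bag-of-words vector:
const vocab = [ "is", "pretty", "seashell", "strange", "this" ];
function bagOfWords( sentence ) {
// Mark each vocabulary word as 1 if it appears in the sentence, 0 otherwise
const words = sentence.toLowerCase().split( " " );
return vocab.map( w => words.includes( w ) ? 1 : 0 );
}
console.log( bagOfWords( "this strange seashell is pretty" ) ); // [ 1, 1, 1, 1, 1 ]
console.log( bagOfWords( "this seashell is pretty strange" ) ); // [ 1, 1, 1, 1, 1 ]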
This is where transformers come in handy. Transformers power the latest neural network architectures behind famous text-generation models such as GPT-2 and GPT-3, as well as recent breakthroughs in high-quality language translation. They are also the technology behind the sentence-level embedding capability we will use to improve our emotion detection.
Plenty of excellent resources cover transformers in depth. For this project, it is enough to know that a transformer is a neural network model that analyzes words along with their positions in a sentence to infer their meaning and importance. If you are interested in reading more about transformers, those in-depth resources are a good place to start.
Setting Up the TensorFlow.js Code With the Universal Sentence Encoder
This project is very similar to our first emotion detection code, so let's use that initial codebase as our starting point, with the word-embedding and prediction parts removed. The one important and powerful library we will add is the Universal Sentence Encoder, a pre-trained, transformer-based language processing model. It is the library we will use to generate more meaningful input vectors for the AI from our text.
<html>
<head>
<title>Detecting Emotion in Text: Chatbots in the Browser with TensorFlow.js</title>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@2.0.0/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/universal-sentence-encoder"></script>
</head>
<body>
<p id="text"></p>
<h1 id="status">Loading...</h1>
<script>
const emotions = [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"optimism",
"pride",
"realization",
"relief",
"remorse",
"sadness",
"surprise",
"neutral"
];
function setText( text ) {
document.getElementById( "status" ).innerText = text;
}
function shuffleArray( array ) {
for( let i = array.length - 1; i > 0; i-- ) {
const j = Math.floor( Math.random() * ( i + 1 ) );
[ array[ i ], array[ j ] ] = [ array[ j ], array[ i ] ];
}
}
(async () => {
// Load GoEmotions data (https://github.com/google-research/google-research/tree/master/goemotions)
let data = await fetch( "web/emotions.tsv" ).then( r => r.text() );
let lines = data.split( "\n" ).filter( x => !!x ); // Split & remove empty lines
// Randomize the lines
shuffleArray( lines );
const numSamples = 200;
let sentences = lines.slice( 0, numSamples ).map( line => {
let sentence = line.split( "\t" )[ 0 ];
return sentence;
});
let outputs = lines.slice( 0, numSamples ).map( line => {
let categories = line.split( "\t" )[ 1 ].split( "," ).map( x => parseInt( x ) );
let output = [];
for( let i = 0; i < emotions.length; i++ ) {
output.push( categories.includes( i ) ? 1 : 0 );
}
return output;
});
// TODO: Generate Training Input
// Define our model with several hidden layers
const model = tf.sequential();
model.add(tf.layers.dense( { units: 100, activation: "relu", inputShape: [ 512 ] } ) );
model.add(tf.layers.dense( { units: 50, activation: "relu" } ) );
model.add(tf.layers.dense( { units: 25, activation: "relu" } ) );
model.add(tf.layers.dense( {
units: emotions.length,
activation: "softmax"
} ) );
model.compile({
optimizer: tf.train.adam(),
loss: "categoricalCrossentropy",
metrics: [ "accuracy" ]
});
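// NOTE: "vectors" below is left over from the previous bag-of-words version; we will replace it with USE sentence embeddings shortly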
const xs = tf.stack( vectors.map( x => tf.tensor1d( x ) ) );
const ys = tf.stack( outputs.map( x => tf.tensor1d( x ) ) );
await model.fit( xs, ys, {
epochs: 100,
shuffle: true,
callbacks: {
onEpochEnd: ( epoch, logs ) => {
setText( `Training... Epoch #${epoch} (${logs.acc})` );
console.log( "Epoch #", epoch, logs );
}
}
} );
// Test prediction every 5s
setInterval( async () => {
// Pick random text
let line = lines[ Math.floor( Math.random() * lines.length ) ];
let sentence = line.split( "\t" )[ 0 ];
let categories = line.split( "\t" )[ 1 ].split( "," ).map( x => parseInt( x ) );
document.getElementById( "text" ).innerText = sentence;
// TODO: Add Model Prediction Code
// let prediction = await model.predict( tf.stack( [ tf.tensor1d( vector ) ] ) ).data();
// Get the index of the highest value in the prediction
// let id = prediction.indexOf( Math.max( ...prediction ) );
// setText( `Result: ${emotions[ id ]}, Expected: ${emotions[ categories[ 0 ] ]}` );
}, 5000 );
})();
</script>
</body>
</html>
The GoEmotions Dataset
We will train our neural network with the same data we used in the first version of this project, taken from the GoEmotions dataset available in the Google Research GitHub repository. It consists of 58,000 English Reddit comments labeled with 27 emotion categories. You can use the full set if you like, but we only need a small subset for this project, so downloading the smaller test set will do. Put the downloaded file in your project folder where your web page can retrieve it through a local web server, such as in a folder named "web".
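For reference, each line of emotions.tsv is tab-separated, with the comment text in the first column and the comma-separated emotion category IDs in the second (any additional columns are ignored by our parsing code). A made-up line, purely for illustration, would be handled like this:
// Hypothetical input line (tab-separated): "Thanks, that really made my day!\t15,17"
// -> sentence:   "Thanks, that really made my day!"
// -> categories: [ 15, 17 ]   // gratitude, joy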
Universal Sentence Encoder
The Universal Sentence Encoder (USE) is "a [pre-trained] model that encodes text into 512-dimensional embeddings." It is built on a transformer architecture and was trained on an 8,000-word vocabulary, producing embeddings (sentence encoding vectors) that can map any sentence by how similar it is to other potential sentences. As mentioned earlier, because it uses transformers, the model considers not only the words in a sentence but also their positions and importance.
Because we can compute its similarity to other sentences, the embedding acts as a practically unique ID for any sentence. This capability is very powerful for NLP tasks, since we can combine a text dataset with its embeddings as input to a neural network to learn and extract different features, such as inferring emotion from the text.
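To make the similarity idea concrete, here is a minimal sketch (assuming the same tfjs and universal-sentence-encoder scripts from the page above are loaded; the sentences are just our earlier examples) that embeds the two seashell sentences and compares their 512-dimensional vectors with cosine similarity:
(async () => {
const encoder = await use.load();
const pair = await encoder.embed( [
"this strange seashell is pretty",
"this seashell is pretty strange"
] );
// Split the [ 2, 512 ] result into two [ 1, 512 ] vectors
const [ a, b ] = tf.split( pair, 2, 0 );
// Cosine similarity: dot product divided by the product of the norms
const cosine = tf.sum( tf.mul( a, b ) ).div( tf.norm( a ).mul( tf.norm( b ) ) );
console.log( "Cosine similarity:", ( await cosine.data() )[ 0 ] );
})();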
USE is very easy to use. Let's load it in our code before we define the network model, and generate the embeddings for each sentence.
// Load the universal sentence encoder
setText( "Loading USE..." );
let encoder = await use.load();
setText( "Loaded!" );
let embeddings = await encoder.embed( sentences );
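If you want to sanity-check the result, the returned embeddings tensor has one 512-dimensional row per sentence, which is exactly why the model's inputShape below is [ 512 ]:
console.log( embeddings.shape ); // e.g. [ 200, 512 ] when numSamples is 200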
Training the AI Model
We don't need to change the model at all, but here it is again to refresh your memory.
We defined a network with three hidden layers that outputs a categorical vector whose length equals the number of emotion categories, where the index of the maximum value is our predicted emotion identifier.
// Define our model with several hidden layers
const model = tf.sequential();
model.add(tf.layers.dense( { units: 100, activation: "relu", inputShape: [ 512 ] } ) ); // 512 = USE embedding size
model.add(tf.layers.dense( { units: 50, activation: "relu" } ) );
model.add(tf.layers.dense( { units: 25, activation: "relu" } ) );
model.add(tf.layers.dense( {
units: emotions.length,
activation: "softmax"
} ) );
model.compile({
optimizer: tf.train.adam(),
loss: "categoricalCrossentropy",
metrics: [ "accuracy" ]
});
And we can update the training input to grab the sentence embeddings directly, since they are already returned as a tensor.
const xs = embeddings;
const ys = tf.stack( outputs.map( x => tf.tensor1d( x ) ) );
await model.fit( xs, ys, {
epochs: 50,
shuffle: true,
callbacks: {
onEpochEnd: ( epoch, logs ) => {
setText( `Training... Epoch #${epoch} (${logs.acc})` );
console.log( "Epoch #", epoch, logs );
}
}
} );
Let's Detect Some Emotion
Now it's time to see how our upgraded emotion detector performs.
Inside the 5-second timer that predicts an emotion category, we can now generate a USE embedding of the sentence and use it as the model's input.
It looks like this:
// Test prediction every 5s
setInterval( async () => {
// Pick random text
let line = lines[ Math.floor( Math.random() * lines.length ) ];
let sentence = line.split( "\t" )[ 0 ];
let categories = line.split( "\t" )[ 1 ].split( "," ).map( x => parseInt( x ) );
document.getElementById( "text" ).innerText = sentence;
let vector = await encoder.embed( [ sentence ] );
let prediction = await model.predict( vector ).data();
// Get the index of the highest value in the prediction
let id = prediction.indexOf( Math.max( ...prediction ) );
setText( `Result: ${emotions[ id ]}, Expected: ${emotions[ categories[ 0 ] ]}` );
}, 5000 );
Finish Line
And that's it. Are you impressed? Here is our final code:
<html>
<head>
<title>Detecting Emotion in Text: Chatbots in the Browser with TensorFlow.js</title>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@2.0.0/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/universal-sentence-encoder"></script>
</head>
<body>
<p id="text"></p>
<h1 id="status">Loading...</h1>
<script>
const emotions = [
"admiration",
"amusement",
"anger",
"annoyance",
"approval",
"caring",
"confusion",
"curiosity",
"desire",
"disappointment",
"disapproval",
"disgust",
"embarrassment",
"excitement",
"fear",
"gratitude",
"grief",
"joy",
"love",
"nervousness",
"optimism",
"pride",
"realization",
"relief",
"remorse",
"sadness",
"surprise",
"neutral"
];
function setText( text ) {
document.getElementById( "status" ).innerText = text;
}
function shuffleArray( array ) {
for( let i = array.length - 1; i > 0; i-- ) {
const j = Math.floor( Math.random() * ( i + 1 ) );
[ array[ i ], array[ j ] ] = [ array[ j ], array[ i ] ];
}
}
(async () => {
// Load GoEmotions data (https://github.com/google-research/google-research/tree/master/goemotions)
let data = await fetch( "web/emotions.tsv" ).then( r => r.text() );
let lines = data.split( "\n" ).filter( x => !!x ); // Split & remove empty lines
// Randomize the lines
shuffleArray( lines );
const numSamples = 200;
let sentences = lines.slice( 0, numSamples ).map( line => {
let sentence = line.split( "\t" )[ 0 ];
return sentence;
});
let outputs = lines.slice( 0, numSamples ).map( line => {
let categories = line.split( "\t" )[ 1 ].split( "," ).map( x => parseInt( x ) );
let output = [];
for( let i = 0; i < emotions.length; i++ ) {
output.push( categories.includes( i ) ? 1 : 0 );
}
return output;
});
// Load the universal sentence encoder
setText( "Loading USE..." );
let encoder = await use.load();
setText( "Loaded!" );
let embeddings = await encoder.embed( sentences );
// Define our model with several hidden layers
const model = tf.sequential();
model.add(tf.layers.dense( { units: 100, activation: "relu", inputShape: [ 512 ] } ) );
model.add(tf.layers.dense( { units: 50, activation: "relu" } ) );
model.add(tf.layers.dense( { units: 25, activation: "relu" } ) );
model.add(tf.layers.dense( {
units: emotions.length,
activation: "softmax"
} ) );
model.compile({
optimizer: tf.train.adam(),
loss: "categoricalCrossentropy",
metrics: [ "accuracy" ]
});
const xs = embeddings;
const ys = tf.stack( outputs.map( x => tf.tensor1d( x ) ) );
await model.fit( xs, ys, {
epochs: 50,
shuffle: true,
callbacks: {
onEpochEnd: ( epoch, logs ) => {
setText( `Training... Epoch #${epoch} (${logs.acc})` );
console.log( "Epoch #", epoch, logs );
}
}
} );
// Test prediction every 5s
setInterval( async () => {
// Pick random text
let line = lines[ Math.floor( Math.random() * lines.length ) ];
let sentence = line.split( "\t" )[ 0 ];
let categories = line.split( "\t" )[ 1 ].split( "," ).map( x => parseInt( x ) );
document.getElementById( "text" ).innerText = sentence;
let vector = await encoder.embed( [ sentence ] );
let prediction = await model.predict( vector ).data();
// Get the index of the highest value in the prediction
let id = prediction.indexOf( Math.max( ...prediction ) );
setText( `Result: ${emotions[ id ]}, Expected: ${emotions[ categories[ 0 ] ]}` );
}, 5000 );
})();
</script>
</body>
</html>
What's Next?
By taking advantage of the powerful Universal Sentence Encoder library, built on the transformer architecture, we improved our model's ability to understand and distinguish between sentences. Could this also work for answering chat questions?
Find out in the next article in this series, AI Chatbots With TensorFlow.js: Improved Trivia Expert.
https://www.codeproject.com/Articles/5282690/AI-Chatbots-With-TensorFlow-js-Improved-Emotion-De