FFM Paper Explained, with a TensorFlow 2 Implementation

This post walks through the FFM paper and shows how to implement the model in TensorFlow 2. A mind map first lays out FFM's core ideas; the post then builds an FFM model in TensorFlow 2, covering data preprocessing, model construction, and training. The code samples show every step of building and training the model, and should be a useful reference for learners moving beyond the basics.



Preface

The FFM write-ups online all say much the same thing, so I curated the best analyses I could find into a single mind map and, borrowing from other people's TensorFlow 2 code, implemented FFM myself. Trust me: work through the material recommended in this post and you will quickly build a deep understanding of how FFM works.

I. Paper Interpretation and Analysis

I organized the better write-ups from around the web into a mind map of FFM analysis; follow it and you will quickly form a clear picture of the paper's ideas. The link:
FFM paper learning roadmap
Each node links to the article I consider the best on that topic; click through and study them one by one.
[Mind map: FFM paper learning roadmap]

Once you have read the paper, the next step is to implement it by hand. The sections below walk through an FFM implementation in TensorFlow 2.
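Concretely, the model built below is a linear part plus the FFM field-aware pairwise interaction term, passed through a sigmoid:

$$\hat{y}(\mathbf{x}) = \sigma\Big(w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{j_1=1}^{n}\sum_{j_2=j_1+1}^{n} \big\langle \mathbf{w}_{j_1, f_{j_2}},\, \mathbf{w}_{j_2, f_{j_1}} \big\rangle\, x_{j_1} x_{j_2}\Big)$$

Here $f_j$ is the field that feature $j$ belongs to, and each latent vector $\mathbf{w}_{j,f} \in \mathbb{R}^k$, so the interaction table holds $n \times f \times k$ parameters; this is exactly the `(input_dim, field_num, k)` weight the model creates in `build` below.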

II. Implementing FFM in TensorFlow 2

1. Imports

import tensorflow as tf
from tensorflow.keras import layers, optimizers
from tensorflow import keras

import numpy as np
import pandas as pd

2. Data preprocessing

# 2. Data preprocessing
def preprocess(x, y):
    # cast features to float32 so they match the model's float32 weights
    # (casting to float64 here would clash with the interaction weights below)
    x = tf.cast(x, dtype=tf.float32)
    y = tf.cast(y, dtype=tf.int64)
    return x, y


from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.2, random_state=111,
                                                    stratify=data.target)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)

train_db = tf.data.Dataset.from_tensor_slices((np.array(x_train), y_train))
train_db = train_db.shuffle(123).map(preprocess).batch(32)
print(train_db)
test_db = tf.data.Dataset.from_tensor_slices((np.array(x_test), y_test))
test_db = test_db.shuffle(123).map(preprocess).batch(32)

sample = next(iter(train_db))
print('sample:', sample[0].shape, sample[1].shape,
      tf.reduce_min(sample[0]), tf.reduce_max(sample[0]))
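One thing the min/max print makes obvious is that the raw breast-cancer features span several orders of magnitude. The pipeline above feeds them in as-is; as an optional tweak (my addition, not part of the original flow), standardizing the arrays before building the tf.data pipelines keeps the pairwise products x_{j1}·x_{j2} from blowing up:

from sklearn.preprocessing import StandardScaler

# fit the scaler on the training split only, then reuse its statistics
# for the test split; rebuild train_db/test_db from the scaled arrays
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)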

3. Build the model and do a test run

class FFM(keras.Model):
    def __init__(self, field_num, feature_field_dict, dim_num, k=8):
        super(FFM, self).__init__()
        self.field_num = field_num                    # number of fields f
        self.k = k                                    # latent dimension
        self.feature_field_dict = feature_field_dict  # feature index -> field index
        self.dim_num = dim_num                        # number of features n

    def build(self, input_shape):
        # linear part w_0 + sum_i w_i x_i; note that these layer regularizers
        # only accumulate into model.losses, which the custom training loop
        # below never adds to the loss (it applies its own l2 penalty instead)
        self.fc = tf.keras.layers.Dense(units=1,
                                        bias_regularizer=tf.keras.regularizers.l2(0.01),
                                        kernel_regularizer=tf.keras.regularizers.l1(0.02))
        # one k-dimensional latent vector per (feature, field) pair:
        # w[j, f, :] is feature j's vector for interacting with field f
        self.w = self.add_weight(shape=(input_shape[-1], self.field_num, self.k),
                                 initializer='glorot_uniform',
                                 trainable=True)
        super(FFM, self).build(input_shape)

    def call(self, x, training=None):
        linear = self.fc(x)
        temp = 0.0
        for j1 in range(self.dim_num):
            for j2 in range(j1 + 1, self.dim_num):
                f1 = self.feature_field_dict[j1]  # field of feature j1
                f2 = self.feature_field_dict[j2]  # field of feature j2
                # <w_{j1,f2}, w_{j2,f1}>: j1's vector for j2's field times
                # j2's vector for j1's field
                # [k] * [k] -> [k] -> [1, k]
                ww = tf.expand_dims(tf.multiply(self.w[j1, f2, :], self.w[j2, f1, :]), axis=0)
                # [batch] * [batch] -> [batch] -> [batch, 1]
                xx = tf.expand_dims(tf.multiply(x[:, j1], x[:, j2]), axis=1)
                # [batch, 1] @ [1, k] = [batch, k]
                store = tf.matmul(xx, ww)
                # inner product: sum over the k latent dims -> [batch, 1]
                temp += tf.reduce_sum(store, keepdims=True, axis=1)
        out = linear + temp
        return tf.sigmoid(out)


# store = {}
# for i in range(30):
#     store[i] = int(i / 15)
# model = FFM(field_num=2, feature_field_dict=store, dim_num=30)
# model.build((None, 30))
# model.summary()
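A side note on the call method above: the double Python loop visits all 435 feature pairs on every forward pass, which is slow and does not trace well under @tf.function. A minimal vectorized sketch (my own rewrite, assuming the same store field mapping and the same (dim_num, field_num, k) weight layout) precomputes the pair indices once and gathers both sets of latent vectors in one shot:

import itertools

# all feature pairs (j1, j2) with j1 < j2, and each pair's field lookups
store = {i: int(i / 15) for i in range(30)}        # same demo split as main()
pairs = list(itertools.combinations(range(30), 2))
j1_idx = tf.constant([p[0] for p in pairs])        # [P]
j2_idx = tf.constant([p[1] for p in pairs])        # [P]
f_j1 = tf.constant([store[p[0]] for p in pairs])   # field of j1, per pair
f_j2 = tf.constant([store[p[1]] for p in pairs])   # field of j2, per pair

def interactions(w, x):
    # w: [dim_num, field_num, k], x: [batch, dim_num]
    w_a = tf.gather_nd(w, tf.stack([j1_idx, f_j2], axis=1))  # w_{j1, f(j2)}, [P, k]
    w_b = tf.gather_nd(w, tf.stack([j2_idx, f_j1], axis=1))  # w_{j2, f(j1)}, [P, k]
    dots = tf.reduce_sum(w_a * w_b, axis=1)                  # inner products, [P]
    xx = tf.gather(x, j1_idx, axis=1) * tf.gather(x, j2_idx, axis=1)  # [batch, P]
    return tf.reduce_sum(xx * dots, axis=1, keepdims=True)   # [batch, 1]

This returns the same [batch, 1] interaction sum that the loop accumulates in temp.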

def main():
    store = {}
    for i in range(30):
        # the feature -> field mapping should come from the real meaning of
        # the data fields; this is just an arbitrary two-way split for demo purposes
        store[i] = int(i / 15)
    model = FFM(field_num=2, feature_field_dict=store, dim_num=30)
    optimizer = optimizers.Adam(learning_rate=1e-2)
    for epoch in range(10):
        for step, (x, y) in enumerate(train_db):
            with tf.GradientTape() as tape:
                # squeeze [batch, 1] -> [batch] so the prediction shape matches y;
                # otherwise binary_crossentropy broadcasts the two and the loss is wrong
                logits = tf.squeeze(model(x, training=True), axis=1)
                loss = tf.reduce_mean(tf.losses.binary_crossentropy(y, logits))
                # manual l2 penalty over all trainable variables
                loss_regularization = []
                for i in model.trainable_variables:
                    loss_regularization.append(tf.nn.l2_loss(i))
                loss_regularization = tf.reduce_sum(tf.stack(loss_regularization))
                loss = 0.001 * loss_regularization + loss
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            print(epoch, step, 'loss:', float(loss))

        total_num = 0
        total_correct = 0
        for x, y in test_db:
            pred = model(x, training=False)
            pred = tf.squeeze(pred)
            pred = pred > 0.5
            pred = tf.cast(pred, dtype=tf.int64)
            correct = tf.cast(tf.equal(pred, y), tf.int64)
            correct = tf.reduce_sum(correct)
            total_num += x.shape[0]
            total_correct += int(correct)
        acc = total_correct / total_num
        print(epoch, 'acc:', acc)
        print("-" * 25)


if __name__ == '__main__':
    main()
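Accuracy with a fixed 0.5 threshold is a fairly coarse way to judge a model like this. As an optional extra (my sketch, reusing the trained model and test_db from above), tf.keras.metrics.AUC gives a threshold-free measure that is standard in the CTR settings where FFM is typically used:

# after training: threshold-free evaluation on the test set
auc = tf.keras.metrics.AUC()
for x, y in test_db:
    pred = tf.squeeze(model(x, training=False), axis=1)
    auc.update_state(y, pred)
print('test AUC:', float(auc.result()))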

If anything is unclear, I recommend reading and stepping through the code yourself; debugging it will give you a much deeper feel for the paper's ideas.

Summary

This post covered the ideas of the FFM paper (through curated analyses by others) and the full process of implementing the model. I hope it helps.
