One Paper a Week (10): Huawei's DeepFM Model

Motivation

The wide side of the Wide & Deep model requires hand-crafted features, which is labor-intensive, and hand-crafted features may well be incomplete. The idea behind DeepFM is to hand this feature-engineering work to an FM, so that the learned second-order interaction latent vectors cover the feature crosses more completely. Another important change is that the wide side and the deep side of DeepFM share the same underlying embedding features, which gives both sides richer inputs and therefore better performance.

Model structure

FM component

The FM in this paper differs slightly from a traditional FM: its latent vectors are not created separately, but are the embedding vectors of the individual first-order features, which also foreshadows the Deep component. In the figure, the "Addition" operation corresponds to computing the first-order terms, and the "Inner Product" operation corresponds to computing the second-order cross terms.
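
In the usual FM formulation (with the shared embedding vectors v_j playing the role of the FM latent vectors), the FM component computes

y_{FM} = \langle w, x \rangle + \sum_{j_1=1}^{d} \sum_{j_2=j_1+1}^{d} \langle v_{j_1}, v_{j_2} \rangle \, x_{j_1} x_{j_2}

where d is the total number of features and w the first-order weights; the first term is the "Addition" part and the pairwise inner-product sum is the "Inner Product" part.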

Deep component

As can be seen, the input to the deep layers is still the embeddings from the FM layer, which is where the idea of feature sharing between the FM and deep parts shows up.
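
Concretely, if e_1, ..., e_F are the F field embeddings (each of size K), the deep component flattens them into an input vector a^{(0)} and stacks standard fully connected layers:

a^{(0)} = [e_1; e_2; \ldots; e_F], \qquad a^{(l+1)} = \sigma\!\left(W^{(l)} a^{(l)} + b^{(l)}\right)

In the implementation below, the output of the last hidden layer is concatenated with the FM terms and projected to a single logit, rather than carrying its own output weight.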

Overall structure

The model output is given by the following formula, after which a cross-entropy loss is computed.

\hat{y} = \mathrm{sigmoid}(y_{FM} + y_{DNN})
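
For a label y ∈ {0, 1}, the cross-entropy (log loss) is the standard binary form:

L(y, \hat{y}) = -\, y \log \hat{y} - (1 - y) \log (1 - \hat{y})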

DeepFM vs. Wide & Deep

  1. DeepFM and Wide & Deep have the same structure in the deep part.
  2. DeepFM needs no feature engineering on the wide side; that work is handled entirely by the FM.
  3. In DeepFM the wide and deep parts share the underlying input embeddings. My own take is that this also makes DeepFM more rigid than Wide & Deep, because the wide and deep parts cannot be tuned independently; Wide & Deep is more flexible in this respect.

Code implementation

Here I reproduce chenchenglong's implementation to walk through how DeepFM is built. The snippet consists of two methods of the model class and assumes TensorFlow 1.x with import tensorflow as tf and import numpy as np; YFOptimizer is the YellowFin optimizer, which the original repository ships separately:

def _init_graph(self):
    self.graph = tf.Graph()
    with self.graph.as_default():
        tf.set_random_seed(self.random_seed)

        self.feat_index = tf.placeholder(tf.int32, shape=[None, None],
                                             name="feat_index")  # None * F
        self.feat_value = tf.placeholder(tf.float32, shape=[None, None],
                                             name="feat_value")  # None * F
        self.label = tf.placeholder(tf.float32, shape=[None, 1], name="label")  # None * 1
        self.dropout_keep_fm = tf.placeholder(tf.float32, shape=[None], name="dropout_keep_fm")
        self.dropout_keep_deep = tf.placeholder(tf.float32, shape=[None], name="dropout_keep_deep")
        self.train_phase = tf.placeholder(tf.bool, name="train_phase")

        self.weights = self._initialize_weights()

        # model
        self.embeddings = tf.nn.embedding_lookup(self.weights["feature_embeddings"],
                                                         self.feat_index)  # None * F * K
        feat_value = tf.reshape(self.feat_value, shape=[-1, self.field_size, 1])
        self.embeddings = tf.multiply(self.embeddings, feat_value)

        # ---------- FM first order term ----------
        self.y_first_order = tf.nn.embedding_lookup(self.weights["feature_bias"], self.feat_index) # None * F * 1
        self.y_first_order = tf.reduce_sum(tf.multiply(self.y_first_order, feat_value), 2)  # None * F
        self.y_first_order = tf.nn.dropout(self.y_first_order, self.dropout_keep_fm[0]) # None * F

        # ---------- FM second order term ---------------
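        # Note: self.embeddings already carries the x_i factors (it was multiplied by feat_value above),
        # so the pairwise interaction sum_{i<j} <v_i, v_j> x_i x_j can be computed in O(F*K) per sample
        # via the identity (applied per embedding dimension): sum_{i<j} a_i a_j = 0.5 * ((sum_i a_i)^2 - sum_i a_i^2),
        # with a_i = v_i * x_i. The K per-dimension values are kept separate here and weighted later
        # by the final concat projection.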
        # sum_square part
        self.summed_features_emb = tf.reduce_sum(self.embeddings, 1)  # None * K
        self.summed_features_emb_square = tf.square(self.summed_features_emb)  # None * K

        # square_sum part
        self.squared_features_emb = tf.square(self.embeddings)
        self.squared_sum_features_emb = tf.reduce_sum(self.squared_features_emb, 1)  # None * K

        # second order
        self.y_second_order = 0.5 * tf.subtract(self.summed_features_emb_square, self.squared_sum_features_emb)  # None * K
        self.y_second_order = tf.nn.dropout(self.y_second_order, self.dropout_keep_fm[1])  # None * K

        # ---------- Deep component ----------
        self.y_deep = tf.reshape(self.embeddings, shape=[-1, self.field_size * self.embedding_size]) # None * (F*K), flatten the embedding vectors
        self.y_deep = tf.nn.dropout(self.y_deep, self.dropout_keep_deep[0])
        for i in range(0, len(self.deep_layers)):
            self.y_deep = tf.add(tf.matmul(self.y_deep, self.weights["layer_%d" %i]), self.weights["bias_%d"%i]) # None * layer[i]
            if self.batch_norm:
                self.y_deep = self.batch_norm_layer(self.y_deep, train_phase=self.train_phase, scope_bn="bn_%d" %i) # None * layer[i]
            self.y_deep = self.deep_layers_activation(self.y_deep)
            self.y_deep = tf.nn.dropout(self.y_deep, self.dropout_keep_deep[1+i]) # dropout at each Deep layer

        # ---------- DeepFM ----------
        if self.use_fm and self.use_deep:
            concat_input = tf.concat([self.y_first_order, self.y_second_order, self.y_deep], axis=1) # concatenate the FM first-order, FM second-order, and deep parts
        elif self.use_fm: 
            concat_input = tf.concat([self.y_first_order, self.y_second_order], axis=1)
        elif self.use_deep:
            concat_input = self.y_deep
        self.out = tf.add(tf.matmul(concat_input, self.weights["concat_projection"]), self.weights["concat_bias"])

        # loss
        if self.loss_type == "logloss":
            self.out = tf.nn.sigmoid(self.out)
            self.loss = tf.losses.log_loss(self.label, self.out)
        elif self.loss_type == "mse":
            self.loss = tf.nn.l2_loss(tf.subtract(self.label, self.out))
        # l2 regularization on weights
        if self.l2_reg > 0:
            self.loss += tf.contrib.layers.l2_regularizer(
                self.l2_reg)(self.weights["concat_projection"])
            if self.use_deep:
                for i in range(len(self.deep_layers)):
                    self.loss += tf.contrib.layers.l2_regularizer(
                        self.l2_reg)(self.weights["layer_%d"%i])

        # optimizer
        if self.optimizer_type == "adam":
            self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate, beta1=0.9, beta2=0.999,
                                                    epsilon=1e-8).minimize(self.loss)
        elif self.optimizer_type == "adagrad":
            self.optimizer = tf.train.AdagradOptimizer(learning_rate=self.learning_rate,
                                                       initial_accumulator_value=1e-8).minimize(self.loss)
        elif self.optimizer_type == "gd":
            self.optimizer = tf.train.GradientDescentOptimizer(learning_rate=self.learning_rate).minimize(self.loss)
        elif self.optimizer_type == "momentum":
            self.optimizer = tf.train.MomentumOptimizer(learning_rate=self.learning_rate, momentum=0.95).minimize(
                self.loss)
        elif self.optimizer_type == "yellowfin":
            self.optimizer = YFOptimizer(learning_rate=self.learning_rate, momentum=0.0).minimize(
                self.loss)

def _initialize_weights(self):
    weights = dict()

    # embeddings
    weights["feature_embeddings"] = tf.Variable(
        tf.random_normal([self.feature_size, self.embedding_size], 0.0, 0.01),
        name="feature_embeddings")  # feature_size * K, 这里feature_size指的是feature的总size
    weights["feature_bias"] = tf.Variable(
        tf.random_uniform([self.feature_size, 1], 0.0, 1.0), name="feature_bias")  # feature_size * 1

    # deep layers
    num_layer = len(self.deep_layers)
    input_size = self.field_size * self.embedding_size # field_size is the number of fields that carry a value in each sample
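    # Glorot/Xavier-style initialization: draw weights from N(0, sqrt(2 / (fan_in + fan_out)))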
    glorot = np.sqrt(2.0 / (input_size + self.deep_layers[0]))
    weights["layer_0"] = tf.Variable(
        np.random.normal(loc=0, scale=glorot, size=(input_size, self.deep_layers[0])), dtype=np.float32) # input_size * layers[0]
    weights["bias_0"] = tf.Variable(np.random.normal(loc=0, scale=glorot, size=(1, self.deep_layers[0])),
                                                    dtype=np.float32)  # 1 * layers[0]
    for i in range(1, num_layer):
        glorot = np.sqrt(2.0 / (self.deep_layers[i-1] + self.deep_layers[i]))
        weights["layer_%d" % i] = tf.Variable(
            np.random.normal(loc=0, scale=glorot, size=(self.deep_layers[i-1], self.deep_layers[i])),
            dtype=np.float32)  # layers[i-1] * layers[i]
        weights["bias_%d" % i] = tf.Variable(
            np.random.normal(loc=0, scale=glorot, size=(1, self.deep_layers[i])),
            dtype=np.float32)  # 1 * layer[i]

    # final concat projection layer
    if self.use_fm and self.use_deep:
        input_size = self.field_size + self.embedding_size + self.deep_layers[-1]
    elif self.use_fm:
        input_size = self.field_size + self.embedding_size
    elif self.use_deep:
        input_size = self.deep_layers[-1]
    glorot = np.sqrt(2.0 / (input_size + 1))
    weights["concat_projection"] = tf.Variable(
                    np.random.normal(loc=0, scale=glorot, size=(input_size, 1)),
                    dtype=np.float32)  # input_size * 1
    weights["concat_bias"] = tf.Variable(tf.constant(0.01), dtype=np.float32)

    return weights
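
To make the feat_index / feat_value input format concrete, below is a minimal sketch of how samples are fed to the placeholders above. The field layout and feature indices are made up for illustration; the point is that every sample supplies exactly field_size (index, value) pairs, with categorical fields using a value of 1.0 and numeric fields using their raw (or normalized) value.

import numpy as np

# Hypothetical global feature space (feature_size = 6):
#   field "gender": index 0 = male, 1 = female
#   field "city":   index 2 = beijing, 3 = shanghai, 4 = shenzhen
#   field "age":    index 5 (numeric, uses its value directly)
# Each sample has field_size = 3 (index, value) pairs.
feat_index = np.array([[0, 2, 5],      # male, beijing, age
                       [1, 4, 5]])     # female, shenzhen, age
feat_value = np.array([[1.0, 1.0, 0.3],   # categorical -> 1.0, numeric -> normalized value
                       [1.0, 1.0, 0.8]])
label = np.array([[1.0], [0.0]])

# One training step would then feed (model being an instance of the class above,
# here assuming two deep layers, hence three deep dropout values):
# feed_dict = {model.feat_index: feat_index,
#              model.feat_value: feat_value,
#              model.label: label,
#              model.dropout_keep_fm: [1.0, 1.0],
#              model.dropout_keep_deep: [1.0, 1.0, 1.0],
#              model.train_phase: True}
# sess.run([model.loss, model.optimizer], feed_dict=feed_dict)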