1 - Factorization Machines (Steffen Rendle, 2010)


ABSTRACT

In this paper, we introduce Factorization Machines (FM) which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models.

Note: FM is a new model class that combines the advantages of SVMs with those of factorization models.

The basic form of a factorization model is $f(x) = q_1(x)\, q_2(x)\, q_3(x) \cdots q_t(x)$.

Like SVMs, FMs are a general predictor working with any real valued feature vector. In contrast to SVMs, FMs model all interactions between variables using factorized parameters. Thus they are able to estimate interactions even in problems with huge sparsity (like recommender systems) where SVMs fail.

When the features are highly sparse, as in recommender systems, SVMs are no longer suitable.
To address this, the FM model introduces factorized parameters, which are used to learn the interactions between (crossed) features.

We show that the model equation of FMs can be calculated in linear time and thus FMs can be optimized directly. So unlike nonlinear SVMs, a transformation in the dual form is not necessary and the model parameters can be estimated directly without the need of any support vector in the solution. We show the relationship to SVMs and the advantages of FMs for parameter estimation in sparse settings.

FMs avoid this drawback of SVM training: the FM model equation can be evaluated in linear time, $O(n)$.

On the other hand there are many different factorization models like matrix factorization, parallel factor analysis or specialized models like SVD++, PITF or FPMC. The drawback of these models is that they are not applicable for general prediction tasks but work only with special input data. Furthermore their model equations and optimization algorithms are derived individually for each task.

Two drawbacks of factorization models:

  • They restrict the form of the input data; e.g. in a recommender system the input must have the form (uid, sid, score).
  • The model equation and its optimization method have to be derived individually for each specific task.

We show that FMs can mimic these models just by specifying the input data (i.e. the feature vectors). This makes FMs easily applicable even for users without expert knowledge in factorization models.

I. INTRODUCTION

Support Vector Machines are one of the most popular predictors in machine learning and data mining. Nevertheless in settings like collaborative filtering, SVMs play no important role and the best models are either direct applications of standard matrix/ tensor factorization models like PARAFAC [1] or specialized models using factorized parameters [2], [3], [4].
In this paper, we show that the only reason why standard SVM predictors are not successful in these tasks is that they cannot learn reliable parameters (‘hyperplanes’) in complex (non-linear) kernel spaces under very sparse data.

Pay particular attention to how the paper shows why non-linear SVMs are unsuitable for sparse datasets.

On the other hand, the drawback of tensor factorization models and even more for specialized factorization models is that
(1) they are not applicable to standard prediction data (e.g. a real valued feature vector $\mathbf{x} \in \mathbb{R}^n$), and
(2) that specialized models are usually derived individually for a specific task requiring effort in modeling and design of a learning algorithm.

In this paper, we introduce a new predictor, the Factorization Machine (FM), that is a general predictor like SVMs but is also able to estimate reliable parameters under very high sparsity.
The factorization machine models all nested variable interactions (comparable to a polynomial kernel in SVM), but uses a factorized parametrization instead of a dense parametrization like in SVMs.

The polynomial kernel has the form $K_d(\mathbf{x}, \mathbf{x}') = (1 + \gamma\, \mathbf{x}^{\top}\mathbf{x}')^{d}$ with $\gamma > 0$.

An SVM with a polynomial kernel uses a dense parametrization of its interaction parameters, whereas the FM uses a factorized parametrization.

We show that the model equation of FMs can be computed in linear time and that it depends only on a linear number of parameters. This allows direct optimization and storage of model parameters without the need of storing any training data (e.g. support vectors) for prediction. In contrast to this, non-linear SVMs are usually optimized in the dual form and computing a prediction (the model equation) depends on parts of the training data (the support vectors).
We also show that FMs subsume many of the most successful approaches for the task of collaborative filtering including biased MF, SVD++ [2], PITF [3] and FPMC [4].

In total, the advantages of our proposed FM are:
1) FMs allow parameter estimation under very sparse data where SVMs fail.
2) FMs have linear complexity, can be optimized in the primal and do not rely on support vectors like SVMs. We show that FMs scale to large datasets like Netflix with 100 million training instances.
3) FMs are a general predictor that can work with any real valued feature vector. In contrast to this, other state-of-the-art factorization models work only on very restricted input data. We will show that just by defining the feature vectors of the input data, FMs can mimic state-of-the-art models like biased MF, SVD++, PITF or FPMC.

II. PREDICTION UNDER SPARSITY

The most common prediction task is to estimate a function $y: \mathbb{R}^n \to T$ from a real valued feature vector $\mathbf{x} \in \mathbb{R}^n$ to a target domain $T$ (e.g. $T = \mathbb{R}$ for regression or $T = \{+1, -1\}$ for classification). In supervised settings, it is assumed that there is a training dataset $D = \{(\mathbf{x}^{(1)}, y^{(1)}), (\mathbf{x}^{(2)}, y^{(2)}), \ldots\}$ of examples for the target function $y$ given. We also investigate the ranking task where the function $y$ with target $T = \mathbb{R}$ can be used to score feature vectors $\mathbf{x}$ and sort them according to their score. Scoring functions can be learned with pairwise training data [5], where a feature tuple $(\mathbf{x}^{(A)}, \mathbf{x}^{(B)}) \in D$ means that $\mathbf{x}^{(A)}$ should be ranked higher than $\mathbf{x}^{(B)}$. As the pairwise ranking relation is antisymmetric, it is sufficient to use only positive training instances.

In this paper, we deal with problems where $\mathbf{x}$ is highly sparse, i.e. almost all of the elements $x_i$ of a vector $\mathbf{x}$ are zero. Let $m(\mathbf{x})$ be the number of non-zero elements in the feature vector $\mathbf{x}$ and $\overline{m}_D$ be the average number of non-zero elements $m(\mathbf{x})$ of all vectors $\mathbf{x} \in D$. Huge sparsity ($\overline{m}_D \ll n$) appears in many real-world data like feature vectors of event transactions (e.g. purchases in recommender systems) or text analysis (e.g. bag of word approach). One reason for huge sparsity is that the underlying problem deals with large categorical variable domains.
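As a quick illustration of these quantities (a minimal sketch of my own, assuming the data is held in a scipy.sparse matrix; the variable names are mine, not the paper's), $m(\mathbf{x})$ and $\overline{m}_D$ can be computed as:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy design matrix: each row is a feature vector x, most entries are zero.
X = csr_matrix(np.array([
    [1.0, 0.0, 0.0, 0.3, 0.0],
    [0.0, 1.0, 0.0, 0.0, 0.5],
    [1.0, 0.0, 0.0, 0.0, 0.0],
]))

m_x = X.getnnz(axis=1)   # m(x): number of non-zero elements per feature vector
m_D = m_x.mean()         # average number of non-zeros over all vectors in D
n = X.shape[1]           # total number of features

print(m_x, m_D, n)       # huge sparsity means m_D << n
```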

Example 1 Assume we have the transaction data of a movie review system. The system records which user $u \in U$ rates a movie (item) $i \in I$ at a certain time $t \in \mathbb{R}$ with a rating $r \in \{1, 2, 3, 4, 5\}$. Let the users $U$ and items $I$ be:

$U = \{\text{Alice (A)}, \text{Bob (B)}, \text{Charlie (C)}, \ldots\}$
$I = \{\text{Titanic (TI)}, \text{Notting Hill (NH)}, \text{Star Wars (SW)}, \text{Star Trek (ST)}, \ldots\}$

Let the observed data $S$ be:
$S = \{(\text{A}, \text{TI}, 2010\text{-}1, 5),\ (\text{A}, \text{NH}, 2010\text{-}2, 3),\ (\text{A}, \text{SW}, 2010\text{-}4, 1),\ (\text{B}, \text{SW}, 2009\text{-}5, 4),\ (\text{B}, \text{ST}, 2009\text{-}8, 5),\ (\text{C}, \text{TI}, 2009\text{-}9, 1),\ (\text{C}, \text{SW}, 2009\text{-}12, 5)\}$

An example of a prediction task using this data is to estimate a function $\hat{y}$ that predicts the rating behavior of a user for an item at a certain point in time.

Figure 1 shows one example of how feature vectors can be created from $S$ for this task. Here, first there are $|U|$ binary indicator variables (blue) that represent the active user of a transaction – there is always exactly one active user in each transaction $(u, i, t, r) \in S$, e.g. user Alice in the first one ($x^{(1)}_{A} = 1$). The next $|I|$ binary indicator variables (red) hold the active item – again there is always exactly one active item (e.g. $x^{(1)}_{TI} = 1$). The feature vectors in figure 1 also contain indicator variables (yellow) for all the other movies the user has ever rated. For each user, the variables are normalized such that they sum up to 1. E.g. Alice has rated Titanic, Notting Hill and Star Wars. Additionally the example contains a variable (green) holding the time in months starting from January, 2009. And finally the vector contains information of the last movie (brown) the user has rated before (s)he rated the active one – e.g. for $\mathbf{x}^{(2)}$, Alice rated Titanic before she rated Notting Hill. In section V, we show how factorization machines using such feature vectors as input data are related to specialized state-of-the-art factorization models.

Fig. 1. Example of sparse real valued feature vectors created from the transactions in Example 1 (active-user indicators in blue, active-item indicators in red, other rated movies in yellow, time in months in green, last rated movie in brown).
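To make this construction concrete, here is a minimal sketch (my own helper code, not from the paper; the function names and the month-numbering convention are assumptions) that assembles one such feature vector from a transaction: a one-hot block for the active user, a one-hot block for the active item, a normalized block over all movies the user has rated, the time in months, and a one-hot block for the last movie rated before the active one.

```python
import numpy as np

users = ["A", "B", "C"]              # Alice, Bob, Charlie
items = ["TI", "NH", "SW", "ST"]     # Titanic, Notting Hill, Star Wars, Star Trek

def months_since_jan_2009(year, month):
    # Time feature (green): months counted from January 2009 (Jan 2009 -> 1), an assumed convention.
    return (year - 2009) * 12 + month

def make_feature_vector(user, item, year, month, rated_items, last_item):
    """Assemble one feature vector x as in Fig. 1 (dense here for readability)."""
    u = np.zeros(len(users)); u[users.index(user)] = 1.0        # active user (blue)
    i = np.zeros(len(items)); i[items.index(item)] = 1.0        # active item (red)
    other = np.zeros(len(items))                                 # all movies the user rated (yellow)
    for it in rated_items:
        other[items.index(it)] = 1.0 / len(rated_items)          # normalized to sum to 1
    t = np.array([float(months_since_jan_2009(year, month))])    # time in months (green)
    last = np.zeros(len(items))                                  # last rated movie (brown)
    if last_item is not None:
        last[items.index(last_item)] = 1.0
    return np.concatenate([u, i, other, t, last])

# x(2): Alice rates Notting Hill in 2010-2; she has rated TI, NH, SW; she rated TI last before NH.
x2 = make_feature_vector("A", "NH", 2010, 2, ["TI", "NH", "SW"], "TI")
print(x2)
```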

We will use this example data throughout the paper for illustration. However please note that FMs are general predictors like SVMs and thus are applicable to any real valued feature vectors and are not restricted to recommender systems.

III. FACTORIZATION MACHINES (FM)

In this section, we introduce factorization machines. We discuss the model equation in detail and show shortly how to apply FMs to several prediction tasks.

A. Factorization Machine Model

1) Model Equation: The model equation for a factorization machine of degree $d = 2$ is defined as:

$$\hat{y}(\mathbf{x}) := w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle\, x_i x_j$$

where the model parameters that have to be estimated are:
$$w_0 \in \mathbb{R}, \quad \mathbf{w} \in \mathbb{R}^{n}, \quad \mathbf{V} \in \mathbb{R}^{n \times k}$$

A row $\mathbf{v}_i$ within $\mathbf{V}$ describes the $i$-th variable with $k$ factors. $k \in \mathbb{N}_0^{+}$ is a hyperparameter that defines the dimensionality of the factorization.

A 2-way FM (degree $d = 2$) captures all single and pairwise interactions between variables:

  • $w_0$ is the global bias.
  • $w_i$ models the strength of the $i$-th variable.
  • $\hat{w}_{i,j} := \langle \mathbf{v}_i, \mathbf{v}_j \rangle$ models the interaction between the $i$-th and $j$-th variable. Instead of using its own model parameter $w_{i,j} \in \mathbb{R}$ for each interaction, the FM models the interaction by factorizing it. We will see later on that this is the key point which allows high quality parameter estimates of higher order interactions ($d \geq 2$) under sparsity.
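The following sketch (my own illustrative code, not from the paper) evaluates this degree-2 model equation. It also checks the algebraic identity $\sum_{i=1}^{n}\sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j\rangle x_i x_j = \frac{1}{2}\sum_{f=1}^{k}\big[(\sum_{i} v_{i,f} x_i)^2 - \sum_{i} v_{i,f}^2 x_i^2\big]$, which is how the linear-time evaluation claimed in the abstract is obtained.

```python
import numpy as np

def fm_predict_naive(x, w0, w, V):
    """Degree-2 FM model equation, literal O(k * n^2) double sum over all pairs."""
    n = len(x)
    y = w0 + w @ x
    for i in range(n):
        for j in range(i + 1, n):
            y += (V[i] @ V[j]) * x[i] * x[j]
    return y

def fm_predict_linear(x, w0, w, V):
    """Same model equation, pairwise term rewritten so it costs O(k * n)."""
    # sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    s = V.T @ x                      # shape (k,): sum_i v_{i,f} x_i for each factor f
    s_sq = (V ** 2).T @ (x ** 2)     # shape (k,): sum_i v_{i,f}^2 x_i^2
    return w0 + w @ x + 0.5 * np.sum(s ** 2 - s_sq)

# Quick check on random parameters (n features, k factors).
rng = np.random.default_rng(0)
n, k = 8, 3
x = rng.random(n)
w0, w, V = rng.random(), rng.random(n), rng.random((n, k))
assert np.isclose(fm_predict_naive(x, w0, w, V), fm_predict_linear(x, w0, w, V))
```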

2) Expressiveness: It is well known that for any positive definite matrix $\mathbf{W}$, there exists a matrix $\mathbf{V}$ such that $\mathbf{W} = \mathbf{V} \cdot \mathbf{V}^{\top}$, provided that $k$ is sufficiently large.
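As a quick numerical illustration of this statement (my own sketch, not from the paper), a positive definite matrix $\mathbf{W}$ can be factored via its symmetric eigendecomposition so that $\mathbf{V}\mathbf{V}^{\top}$ reproduces it exactly when $k = n$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((5, 5))
W = A @ A.T + 5 * np.eye(5)           # a positive definite interaction matrix W

eigvals, eigvecs = np.linalg.eigh(W)  # symmetric eigendecomposition W = Q diag(lambda) Q^T
V = eigvecs * np.sqrt(eigvals)        # V with k = n factors, so V @ V.T == W exactly

assert np.allclose(V @ V.T, W)
# With k < n, V @ V.T only approximates W; that restriction is what makes the
# factorized FM parametrization compact and estimable under sparsity.
```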
