pca java weka - PCA in Weka takes too long to run

I am trying to use Weka for feature selection with the PCA algorithm.

My original feature space contains ~9000 attributes, in 2700 samples.

I tried to reduce the dimensionality of the data using the following code:

AttributeSelection selector = new AttributeSelection();
PrincipalComponents pca = new PrincipalComponents();
Ranker ranker = new Ranker();

selector.setEvaluator(pca);
selector.setSearch(ranker);

Instances instances = SamplesManager.asWekaInstances(trainSet);
try {
    selector.SelectAttributes(instances);
    return SamplesManager.asSamplesList(selector.reduceDimensionality(instances));
} catch (Exception e) {
    ...
}

However, it did not finish running within 12 hours; it gets stuck in the call to selector.SelectAttributes(instances).

My questions are:

Is such a long computation time expected for Weka's PCA, or am I using PCA incorrectly?

If the long run time is expected:

How can I tune the PCA algorithm to run much faster? Can you suggest an alternative (with example code showing how to use it)?

If it is not:

What am I doing wrong? How should I invoke PCA in Weka to obtain the reduced-dimensionality data?

Update: The comments confirm my suspicion that it is taking much more time than expected.

I'd like to know: how can I perform PCA in Java, using Weka or an alternative library?

Added a bounty for this one.

Solution

After digging into the WEKA code, the bottleneck turns out to be building the covariance matrix and then computing the eigenvectors of that matrix. Even switching to a sparse matrix implementation (I used COLT's SparseDoubleMatrix2D) did not help.
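For context on why this step blows up, here is a minimal sketch, assuming COLT (the library mentioned above) is on the classpath; the class name and matrix size are illustrative, not the author's actual code. It only mimics the shape of the bottleneck: a d x d covariance matrix followed by an O(d^3) eigendecomposition, which at d ≈ 9000 means roughly 81 million matrix entries.

import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix2D;
import cern.colt.matrix.linalg.EigenvalueDecomposition;

public class PcaBottleneckSketch {
    public static void main(String[] args) {
        // Illustrative size; with the ~9000 attributes from the question the
        // covariance matrix holds ~81 million doubles (~650 MB), and the
        // eigendecomposition below scales as O(d^3).
        int d = 1000;

        DoubleMatrix2D covariance = new DenseDoubleMatrix2D(d, d);

        // This decomposition is the step that dominates the runtime in practice.
        EigenvalueDecomposition evd = new EigenvalueDecomposition(covariance);
        DoubleMatrix2D eigenvectors = evd.getV();
        System.out.println("eigenvector matrix: " + eigenvectors.rows() + " x " + eigenvectors.columns());
    }
}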

The solution I came up with was to first reduce the dimensionality using a fast method (I used an information-gain ranker and filtering based on document frequency), and then run PCA on the reduced feature set to reduce it further.

The code is more complex, but it essentially comes down to this:

Ranker ranker = new Ranker();
InfoGainAttributeEval ig = new InfoGainAttributeEval();
Instances instances = SamplesManager.asWekaInstances(trainSet);

// First pass: rank all attributes by information gain and keep the top ones.
ig.buildEvaluator(instances);
int[] firstAttributes = ranker.search(ig, instances);
int[] candidates = Arrays.copyOfRange(firstAttributes, 0, FIRST_SIZE_REDUCTION);
instances = reduceDimensions(instances, candidates); // helper that keeps only the selected attributes

// Second pass: run PCA on the already-reduced feature set.
PrincipalComponents pca = new PrincipalComponents();
pca.setVarianceCovered(var);
ranker = new Ranker();
ranker.setNumToSelect(numFeatures);

AttributeSelection selection = new AttributeSelection();
selection.setEvaluator(pca);
selection.setSearch(ranker);
selection.SelectAttributes(instances);
instances = selection.reduceDimensionality(instances);

However, this method scored worse than using greedy information gain with a ranker alone, when I cross-validated for estimated accuracy.
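For reference, a minimal sketch of how such a comparison can be cross-validated with Weka's AttributeSelectedClassifier, which redoes the attribute selection inside every fold. The ARFF path, the base classifier (NaiveBayes), the number of attributes to keep, and the fold count are illustrative assumptions, not what the author necessarily used.

import java.util.Random;

import weka.attributeSelection.InfoGainAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.meta.AttributeSelectedClassifier;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class FeatureSelectionCv {
    public static void main(String[] args) throws Exception {
        // "train.arff" is a placeholder path; replace it with the real data set.
        Instances data = DataSource.read("train.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Information-gain ranking wrapped with a classifier, so that the
        // selection is learned only from the training folds.
        InfoGainAttributeEval evaluator = new InfoGainAttributeEval();
        Ranker ranker = new Ranker();
        ranker.setNumToSelect(500); // illustrative size of the reduced feature set

        AttributeSelectedClassifier asc = new AttributeSelectedClassifier();
        asc.setEvaluator(evaluator);
        asc.setSearch(ranker);
        asc.setClassifier(new NaiveBayes()); // any base classifier works here

        // 10-fold cross-validation to estimate the accuracy of the whole pipeline.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(asc, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
    }
}

Swapping the evaluator and search for the PCA-based setup gives a like-for-like accuracy comparison between the two pipelines.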
