Canopy Algorithm in Java: Mahout Canopy Source Code Analysis, Part 2 (CanopyMapper)

First, a correction. The Canopy test example in the previous post contained this line of code:

buildClusters(Configuration conf, Path input, Path output, DistanceMeasure measure, double t1, double t2, double t3, double t4, int clusterFilter, boolean runSequential)

I previously thought clusterFilter was the number of clusters. In fact it is a threshold on the size of each cluster: a canopy is only kept if it contains at least clusterFilter + 1 samples. It cannot be the number of clusters, because discovering the number of clusters is exactly what Canopy is for.
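This follows from the check in the cleanup() function (mimicked in the standalone code below), which uses a strict greater-than, so a canopy is only emitted when it has observed at least clusterFilter + 1 samples:

if (canopy.getNumObservations() > clusterFilter) {
  // strict ">", hence "at least clusterFilter + 1 samples"
  center.put(new Text("centroid" + num++), new VectorWritable(canopy.getCenter()));
}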

Now for the analysis of CanopyMapper.

First, CanopyMapper is adapted into code that runs without Hadoop, i.e. pure Java, as follows:

package mahout.test.canopy.debug;

import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.canopy.Canopy;
import org.apache.mahout.clustering.canopy.CanopyClusterer;
import org.apache.mahout.clustering.canopy.CanopyConfigKeys;
import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.common.distance.ManhattanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

import com.google.common.collect.Lists;

/**
 * Canopy test program (standalone re-implementation of CanopyMapper).
 * @author fansy
 * time 2013/7/21 22:59
 * edited at 2013/7/22 21:38
 */
public class CanopyDebug {

  private static final Collection<Canopy> canopies = Lists.newArrayList();
  private static CanopyClusterer canopyClusterer;
  private static int clusterFilter;
  private static int num = 0;
  private static Map<Text, VectorWritable> center = new HashMap<Text, VectorWritable>();

  public static void main(String[] args) {
    // initialization
    DistanceMeasure measure = new ManhattanDistanceMeasure();
    double t1 = 30.2;
    double t2 = 5.3;
    int clusterFilter = 1;
    Configuration conf = init(measure, t1, t2, clusterFilter);
    // setup()
    setup(conf);
    // map()
    map();
    // cleanup()
    Map<Text, VectorWritable> result = cleanup();
    System.out.println("done..." + result);
  }

  /**
   * Initialization.
   * @param clusterFilter a canopy is kept only if it observes more than this many samples
   */
  public static Configuration init(DistanceMeasure measure, double t1, double t2, int clusterFilter) {
    Configuration conf = new Configuration();
    double t3 = t1;
    double t4 = t2;
    conf.set(CanopyConfigKeys.DISTANCE_MEASURE_KEY, measure.getClass().getName());
    conf.set(CanopyConfigKeys.T1_KEY, String.valueOf(t1));
    conf.set(CanopyConfigKeys.T2_KEY, String.valueOf(t2));
    conf.set(CanopyConfigKeys.T3_KEY, String.valueOf(t3));
    conf.set(CanopyConfigKeys.T4_KEY, String.valueOf(t4));
    conf.set(CanopyConfigKeys.CF_KEY, String.valueOf(clusterFilter));
    return conf;
  }

  /**
   * Mimics the Mapper's setup().
   * @param conf the initial parameters
   */
  public static void setup(Configuration conf) {
    canopyClusterer = new CanopyClusterer(conf);
    clusterFilter = Integer.parseInt(conf.get(CanopyConfigKeys.CF_KEY));
  }

  /**
   * Generates random input data.
   */
  public static List<VectorWritable> makeInData() {
    List<VectorWritable> list = new ArrayList<VectorWritable>();
    VectorWritable vw = null;
    Vector vector = null;
    for (int i = 0; i < 10; i++) {
      vw = new VectorWritable();
      vector = new RandomAccessSparseVector(3);
      vector.set(0, i % 3 * (i % 3) * (i % 3) * (i % 3) + Math.random());
      vector.set(1, i % 3 * (i % 3) * (i % 3) * (i % 3) + Math.random());
      vector.set(2, i % 3 * (i % 3) * (i % 3) * (i % 3) + Math.random());
      vw.set(vector);
      list.add(vw);
    }
    return list;
  }

  /**
   * Mimics the Mapper's map().
   */
  public static void map() {
    List<VectorWritable> vwList = makeInData2();
    for (VectorWritable point : vwList) {
      canopyClusterer.addPointToCanopies(point.get(), canopies);
    }
  }

  /**
   * Mimics the Mapper's cleanup().
   */
  public static Map<Text, VectorWritable> cleanup() {
    for (Canopy canopy : canopies) {
      canopy.computeParameters();
      if (canopy.getNumObservations() > clusterFilter) {
        center.put(new Text("centroid" + num++), new VectorWritable(canopy.getCenter()));
      }
    }
    return center;
  }

  /**
   * Fixed input data: a second way to generate the input, convenient for debugging.
   */
  public static List<VectorWritable> makeInData2() {
    List<VectorWritable> list = new ArrayList<VectorWritable>();
    VectorWritable vw = null;
    Vector vector = null;
    for (int i = 0; i < 10; i++) {
      vw = new VectorWritable();
      vector = new RandomAccessSparseVector(3);
      vector.set(0, i % 3 * (i % 3) * (i % 3) * (i % 3) + 1);
      vector.set(1, i % 3 * (i % 3) * (i % 3) * (i % 3) + 1);
      vector.set(2, i % 3 * (i % 3) * (i % 3) * (i % 3) + 1);
      vw.set(vector);
      list.add(vw);
    }
    return list;
  }
}

(Note: in the originally posted version the generic type parameters had been stripped by the blog's HTML rendering, and init() declared its last parameter as clusterFileter while setting CF_KEY from the still-uninitialized static clusterFilter field; both are restored/fixed above.)
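To run this standalone version, only the classpath matters: the Mahout and Hadoop classes it imports must be available (for the Mahout releases of that era, roughly the mahout-core and mahout-math jars plus their Hadoop and Guava dependencies; exact versions are left as an assumption). No cluster is needed, because nothing here actually touches HDFS or MapReduce.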

With the code above you can debug directly and watch the result of every step, which makes the algorithm much easier to understand. Let's now walk through it with a concrete example.

Input data (in VectorWritable form):

[1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0]

The data above, already converted to VectorWritable, is produced by the makeInData2() function (its formula (i % 3)^4 + 1 yields 1, 2 and 17 cyclically). Execution then enters map(), whose for loop mimics the map() method of CanopyMapper. The method it calls is addPointToCanopies; opening the CanopyClusterer class, its main content is as follows:

public void addPointToCanopies(Vector point, Collection<Canopy> canopies) {
  boolean pointStronglyBound = false;
  for (Canopy canopy : canopies) {
    double dist = measure.distance(canopy.getCenter().getLengthSquared(),
        canopy.getCenter(), point);
    if (dist < t1) {
      if (log.isDebugEnabled()) {
        log.debug("Added point: {} to canopy: {}",
            AbstractCluster.formatVector(point, null), canopy.getIdentifier());
      }
      canopy.observe(point);
    }
    pointStronglyBound = pointStronglyBound || dist < t2;
  }
  if (!pointStronglyBound) {
    if (log.isDebugEnabled()) {
      log.debug("Created new Canopy:{} at center:{}",
          nextCanopyId, AbstractCluster.formatVector(point, null));
    }
    canopies.add(new Canopy(point, nextCanopyId++, measure));
  }
}
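To see the t1/t2 rule in isolation, here is a minimal, Mahout-free sketch of the same decision logic with Manhattan distance (plain Java with hypothetical names, not the Mahout source; it assumes, as in Mahout's map phase, that the reference center of a canopy stays at the point that created it, since centers are only recomputed in computeParameters()):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CanopyRuleSketch {

  // Manhattan (L1) distance between two points
  static double manhattan(double[] a, double[] b) {
    double d = 0;
    for (int i = 0; i < a.length; i++) {
      d += Math.abs(a[i] - b[i]);
    }
    return d;
  }

  public static void main(String[] args) {
    double t1 = 30.2, t2 = 5.3;
    double[][] points = {
        {1, 1, 1}, {2, 2, 2}, {17, 17, 17}, {1, 1, 1}, {2, 2, 2},
        {17, 17, 17}, {1, 1, 1}, {2, 2, 2}, {17, 17, 17}, {1, 1, 1}};
    List<double[]> centers = new ArrayList<>();       // one entry per canopy
    for (double[] p : points) {
      boolean stronglyBound = false;
      for (double[] c : centers) {
        double dist = manhattan(c, p);
        if (dist < t1) {
          // Mahout would call canopy.observe(point) here, updating s0/s1/s2
          System.out.println(Arrays.toString(p) + " observed by canopy at " + Arrays.toString(c));
        }
        stronglyBound |= dist < t2;
      }
      if (!stronglyBound) {
        centers.add(p);                               // new canopy centered on this point
        System.out.println("new canopy at " + Arrays.toString(p));
      }
    }
    System.out.println("canopies created: " + centers.size());   // 2 for this data
  }
}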

For the first sample, [1.0,1.0,1.0], canopies is empty, so canopies.add(new Canopy(point, nextCanopyId++, measure)) is executed directly, generating the first canopy. Let's first look at the Canopy class: its parent is DistanceMeasureCluster, whose parent in turn is AbstractCluster, with the following fields:

private int id;
private long numObservations;
private long totalObservations;
private Vector center;
private Vector radius;
// the observation statistics
private double s0;
private Vector s1;
private Vector s2;

Inside this function the only canopy fields that change are s0, s1 and s2: s0 is the number of samples this canopy has observed, s1 is the element-wise sum of those samples, and s2 is the element-wise sum of their squared components.
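A minimal sketch of this bookkeeping in plain Java (arrays instead of Mahout Vectors, illustrative only), to show exactly what observe() accumulates:

// the three running statistics kept per canopy
double   s0 = 0;                 // number of observed samples
double[] s1 = new double[3];     // element-wise sum of the samples
double[] s2 = new double[3];     // element-wise sum of the squared components

void observe(double[] point) {
  s0++;
  for (int i = 0; i < point.length; i++) {
    s1[i] += point[i];
    s2[i] += point[i] * point[i];
  }
}

From these three statistics computeParameters() can later derive the center and radius, as described further below.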

For the second sample, [2.0,2.0,2.0], dist = 3 (Manhattan distance to the first canopy's center [1.0,1.0,1.0]). Since 3 < t1 = 30.2 the canopy observes the point, and since 3 < t2 = 5.3 the point is strongly bound, so no new canopy is created.

For the third sample, [17,17,17], dist = 16*3 = 48 is greater than both t1 = 30.2 and t2 = 5.3, so the existing canopy neither observes the point nor binds it strongly; canopies.add() is therefore executed, creating a second canopy. The remaining samples follow the same pattern, as traced below.
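Tracing the remaining samples by hand (distances are always taken to the point that created each canopy, since centers are only recomputed in computeParameters()): the 4th, 7th and 10th samples [1,1,1] and the 5th and 8th samples [2,2,2] are within t2 of the first canopy, so it observes them and they are strongly bound; the 6th and 9th samples [17,17,17] are at distance 0 from the second canopy and 48 from the first, so only the second observes them. After the map pass the first canopy has observed 7 samples (s0 = 7, s1 = [10,10,10], s2 = [16,16,16]) and the second 3 samples (s0 = 3, s1 = [51,51,51], s2 = [867,867,867]); with clusterFilter = 1 both satisfy the > clusterFilter check in cleanup(), so both centroids are emitted.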

The last method executed in the CanopyMapper class is cleanup(), which is also mimicked here. Its job is really just filtering plus setting values: filtering means a canopy is kept only if its sample count is > clusterFilter, and setting values (done by canopy.computeParameters()) means computing the radius and center. The relevant code is:

setNumObservations((long) getS0());
setTotalObservations(getTotalObservations() + getNumObservations());
setCenter(getS1().divide(getS0()));
// compute the component stds
if (getS0() > 1) {
  setRadius(getS2().times(getS0()).minus(getS1().times(getS1()))
      .assign(new SquareRootFunction()).divide(getS0()));
}
setS0(0);
setS1(center.like());
setS2(center.like());

center is set to s1/s0. For radius, let A = s2*s0 - s1.*s1 (the first operation is multiplication by the scalar s0, the second is an element-wise product); taking the square root of each component of A and dividing by s0 gives radius. In other words radius_i = sqrt(s0*s2_i - s1_i^2) / s0 = sqrt(s2_i/s0 - (s1_i/s0)^2), which is exactly the per-component (population) standard deviation, matching the "compute the component stds" comment in the source.
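As a concrete check, for the first canopy from the hand trace above (s0 = 7, s1 = [10,10,10], s2 = [16,16,16]):

center = s1 / s0 = [10/7, 10/7, 10/7] ≈ [1.43, 1.43, 1.43]
radius_i = sqrt(s0*s2_i - s1_i^2) / s0 = sqrt(7*16 - 100) / 7 = sqrt(12) / 7 ≈ 0.49

which is the population standard deviation of the seven observed values {1,1,1,1,2,2,2} in each dimension.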

Share, be happy, grow.

Reposted from: CSDN blog
