[HBase] HBase Coprocessors


These are quick notes from my own study of the subject; I will add to them over time. The main reference is "HBase: The Definitive Guide".

Filters reduce the amount of data shipped from server to client over the network, which improves efficiency. With HBase's coprocessor feature we can go a step further and move the computation itself to the nodes where the data lives.

Introduction to Coprocessors

Coprocessors let you run arbitrary code directly on each region server. More precisely, they execute code on a per-region basis in response to events, much like stored procedures in a relational database system.

To use a coprocessor you write a dedicated class based on a specific interface and make it available to the region servers as a JAR (for example by placing the JAR under $HBASE_HOME/lib/). Coprocessor classes can then be loaded either statically through the configuration file or dynamically from application code.


The coprocessor framework provides two base classes of coprocessor:

1. Observer

This kind of coprocessor is comparable to a trigger: a callback is executed whenever a particular event occurs. There are three observer types:

RegionObserver

Handles data manipulation events; this kind of coprocessor is tightly bound to the regions of a table. Think of it as a DML coprocessor.

MasterObserver

Handles administrative events; its scope is cluster-wide. Think of it as a DDL coprocessor (a minimal sketch follows this list).

WALObserver

Handles write-ahead log processing events.
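
To make the observer idea concrete, here is a minimal MasterObserver sketch in the spirit of the book's MasterObserverExample (the class name the static configuration below refers to). It merely logs table creations; the callback signature assumes the 0.92-era API this post is based on, so treat it as a sketch rather than a definitive implementation:

package coprocessor;

import java.io.IOException;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;

public class MasterObserverExample extends BaseMasterObserver {

	//called by the master after a new table has been created
	@Override
	public void postCreateTable(
			ObserverContext<MasterCoprocessorEnvironment> env,
			HRegionInfo[] regions, boolean sync) throws IOException {
		String tableName = regions[0].getTableDesc().getNameAsString();
		System.out.println("Table created: " + tableName);
	}
}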

2. Endpoint

Observers react to server-side events automatically; endpoints, by contrast, are invoked explicitly by the client and execute code on the region servers, much like stored procedures, typically to push computation such as aggregation to where the data resides. They are covered in detail in the Endpoints section below.

The Coprocessor Class

All coprocessor classes must implement the org.apache.hadoop.hbase.Coprocessor interface.

1. Constants

Four static constants, PRIORITY_HIGHEST, PRIORITY_SYSTEM, PRIORITY_USER, and PRIORITY_LOWEST, express a coprocessor's priority; the lower the value, the higher the priority.

2. Methods

start(env) and stop(env): these two methods are called when the coprocessor class is started, and eventually when it is decommissioned.

The env parameter is used to retain the coprocessor's state across its whole lifecycle. A minimal lifecycle sketch follows the interface listing below.

package org.apache.hadoop.hbase;

import java.io.IOException;

/**
 * Coprocess interface.
 */
public interface Coprocessor {
  static final int VERSION = 1;

  /** Highest installation priority */
  static final int PRIORITY_HIGHEST = 0;
  /** High (system) installation priority */
  static final int PRIORITY_SYSTEM = Integer.MAX_VALUE / 4;
  /** Default installation priority for user coprocessors */
  static final int PRIORITY_USER = Integer.MAX_VALUE / 2;
  /** Lowest installation priority */
  static final int PRIORITY_LOWEST = Integer.MAX_VALUE;

  /**
   * Lifecycle state of a given coprocessor instance.
   */
  public enum State {
    UNINSTALLED,
    INSTALLED,
    STARTING,
    ACTIVE,
    STOPPING,
    STOPPED
  }

  // Interface
  void start(CoprocessorEnvironment env) throws IOException;

  void stop(CoprocessorEnvironment env) throws IOException;
}
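
To make the lifecycle concrete, here is a minimal sketch of my own (not from the book) that implements both methods and keeps the environment between start() and stop(). It assumes the environment exposes a getHBaseVersion() accessor, as the 0.92-era CoprocessorEnvironment does:

package coprocessor;

import java.io.IOException;

import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;

public class LifecycleExample implements Coprocessor {

	private CoprocessorEnvironment env;

	@Override
	public void start(CoprocessorEnvironment env) throws IOException {
		//invoked when the coprocessor is started; the environment
		//carries its state for the whole lifecycle
		this.env = env;
		System.out.println("started, HBase version: " + env.getHBaseVersion());
	}

	@Override
	public void stop(CoprocessorEnvironment env) throws IOException {
		//invoked when the coprocessor is decommissioned
		this.env = null;
		System.out.println("stopped");
	}
}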

Coprocessor Loading

Coprocessors can be loaded statically or dynamically.

Static loading: add configuration like the following to hbase-site.xml. Note that region coprocessor classes configured this way are loaded for all regions of all tables, and the classes in each property value are loaded in the order listed.

<property>
	<name>hbase.coprocessor.region.classes</name>
	<value>coprocessor.RegionObserverExample,coprocessor.AnotherCoprocessor</value>
</property>
<property>
	<name>hbase.coprocessor.master.classes</name>
	<value>coprocessor.MasterObserverExample</value>
</property>
<property>
	<name>hbase.coprocessor.wal.classes</name>
	<value>coprocessor.WALObserverExample,bar.foo.MyWALObserver</value>
</property>


Dynamic loading: done through the interface offered by the table descriptor. The example below creates a table called testtable and dynamically loads RegionObserverExample into that table's regions.

package coprocessor;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class LoadWithTableDescriptorExample {

	public static void main(String[] args) throws IOException {
		Configuration conf = HBaseConfiguration.create();
		FileSystem fs = FileSystem.get(conf);
		//path to the JAR that contains the coprocessor
		Path path = new Path(fs.getUri() + Path.SEPARATOR +
				"test/coprocessor/test.jar");
		//define the table with one column family
		HTableDescriptor htd = new HTableDescriptor("testtable");
		htd.addFamily(new HColumnDescriptor("colfam1"));
		//set the coprocessor to load: "jar-path|class|priority"
		htd.setValue("COPROCESSOR$1", path.toString() +
				"|" + RegionObserverExample.class.getCanonicalName() +
				"|" + Coprocessor.PRIORITY_USER);
		//create the table "testtable"
		HBaseAdmin admin = new HBaseAdmin(conf);
		admin.createTable(htd);

		System.out.println("end");
	}
}
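
To verify that the coprocessor attribute was stored with the table, the descriptor can be read back. This is a small sketch of my own; the printed descriptor should include the COPROCESSOR$1 value set above:

HBaseAdmin admin = new HBaseAdmin(conf);
//the output should list the COPROCESSOR$1 attribute
HTableDescriptor desc = admin.getTableDescriptor(Bytes.toBytes("testtable"));
System.out.println(desc);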

Below is the implementation of the RegionObserverExample class. Once it compiles, package it into test.jar and upload it to the hdfs://master:9000/test/coprocessor directory.

package coprocessor;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionObserverExample extends BaseRegionObserver {

	public static final byte[] FIXED_ROW = Bytes.toBytes("@@@GETTIME@@@");

	//when the special row "@@@GETTIME@@@" is requested with a get,
	//return the current server time as a byte array
	@Override
	public void preGet(
			final ObserverContext<RegionCoprocessorEnvironment> e,
			final Get get, final List<KeyValue> results) throws IOException {
		if (Bytes.equals(get.getRow(), FIXED_ROW)) {
			KeyValue kv = new KeyValue(get.getRow(), FIXED_ROW, FIXED_ROW,
					Bytes.toBytes(System.currentTimeMillis()));
			results.add(kv);
		}
	}
}
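
With the table created and the coprocessor attached, an ordinary client get on the magic row exercises the hook. A minimal sketch of my own, assuming the setup above is in place:

Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "testtable");
//the value is filled in on the server side by preGet()
Result result = table.get(new Get(RegionObserverExample.FIXED_ROW));
long serverTime = Bytes.toLong(result.raw()[0].getValue());
System.out.println("Server time: " + serverTime);
table.close();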


Endpoints

The RegionObserver example above adds a computed column during a get request for a known row key. It might seem that this is enough to implement other functionality as well, for example an aggregation function that returns the sum of all values in a given column. However, a RegionObserver cannot do that: the row key determines which region handles the request, so the computation request is only ever sent to a single server.

To overcome this limitation of RegionObserver, the coprocessor framework provides a dynamic call implementation, known as the endpoint concept.


The CoprocessorProtocol interface

An endpoint declares the methods that clients can invoke remotely in an interface that extends org.apache.hadoop.hbase.ipc.CoprocessorProtocol.
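
For reference, the interface itself is little more than a version marker; the following is a sketch from memory of the 0.92-era source, where VersionedProtocol is the underlying RPC versioning interface:

public interface CoprocessorProtocol extends VersionedProtocol {
  public static final long VERSION = 1L;
}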


The BaseEndpointCoprocessor class

On the server side, the implementation conveniently extends org.apache.hadoop.hbase.coprocessor.BaseEndpointCoprocessor, which handles the Coprocessor plumbing and exposes the runtime environment through getEnvironment() (used in the example below).

Implementing an endpoint therefore involves two steps:

1. Extend the CoprocessorProtocol interface.

2. Extend the BaseEndpointCoprocessor class.

Here is a small example that lets a client retrieve, via a remote call, the number of rows and the number of KeyValues in each region.


1. The RowCountProtocol interface:

import java.io.IOException;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.ipc.CoprocessorProtocol;

public interface RowCountProtocol extends CoprocessorProtocol {
	//number of rows in the region
	long getRowCount() throws IOException;
	//number of rows remaining after the given filter is applied
	long getRowCount(Filter filter) throws IOException;
	//number of KeyValues in the region
	long getKeyValueCount() throws IOException;
}
2. The RowCountEndPoint class:

package coprocessor;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.BaseEndpointCoprocessor;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.regionserver.InternalScanner;

public class RowCountEndPoint extends BaseEndpointCoprocessor
		implements RowCountProtocol {

	@Override
	public long getRowCount() throws IOException {
		//a FirstKeyOnlyFilter keeps the scan cheap: one KeyValue per row
		return this.getRowCount(new FirstKeyOnlyFilter());
	}

	@Override
	public long getRowCount(Filter filter) throws IOException {
		return this.getRowCount(filter, false);
	}

	@Override
	public long getKeyValueCount() throws IOException {
		return this.getRowCount(null, true);
	}

	public long getRowCount(Filter filter, boolean countKeyValue)
			throws IOException {
		Scan scan = new Scan();
		scan.setMaxVersions(1);
		if (filter != null) {
			scan.setFilter(filter);
		}

		RegionCoprocessorEnvironment environment =
				(RegionCoprocessorEnvironment) this.getEnvironment();

		//scan the hosting region with an internal scanner
		InternalScanner scanner = environment.getRegion().getScanner(scan);
		long result = 0;
		try {
			List<KeyValue> curValue = new ArrayList<KeyValue>();
			boolean hasMore = false;
			do {
				curValue.clear();
				hasMore = scanner.next(curValue);
				//guard against counting an empty final batch
				if (!curValue.isEmpty()) {
					result += countKeyValue ? curValue.size() : 1;
				}
			} while (hasMore);
		} finally {
			scanner.close();
		}
		return result;
	}
}

3. Deploy the endpoint:

3.1 Package the classes above into my_coprocessor.jar and copy it into the $HBASE_HOME/lib directory on every RegionServer node.

3.2 Edit the $HBASE_HOME/conf/hbase-site.xml configuration file and add the following:

<property>
	<name>hbase.coprocessor.region.classes</name>
	<value>coprocessor.RegionObserverExample,coprocessor.RowCountEndPoint</value>
</property>
3.3 Restart the HBase cluster.

4. Invoke the endpoint coprocessor defined above from a client:

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.util.Bytes;

public class EndPointExample {

	/**
	 * @author mango_song
	 */
	public static void main(String[] args) throws IOException {
		Configuration conf = HBaseConfiguration.create();
		HTable table = new HTable(conf, "test");

		try {
			/* HTable.coprocessorExec(protocol, startKey, endKey, callable)
			 * invokes the passed Batch.Call against the CoprocessorProtocol
			 * instances running in the selected regions: all regions from
			 * the one containing startKey through the one containing endKey
			 * (inclusive). If startKey or endKey is null, the first or last
			 * region of the table, respectively, is used. It returns a map
			 * of region names to the values returned by Batch.Call.call().
			 */
			Map<byte[], Long> results = table.coprocessorExec(
					RowCountProtocol.class,
					null,
					null,
					new Batch.Call<RowCountProtocol, Long>() {
						@Override
						public Long call(RowCountProtocol instance)
								throws IOException {
							return instance.getRowCount();
						}
					});

			//print each region's row count and the grand total
			long total = 0;
			for (Map.Entry<byte[], Long> entry : results.entrySet()) {
				total += entry.getValue();
				System.out.println("Region: " + Bytes.toString(entry.getKey()) +
						", Count: " + entry.getValue());
			}
			System.out.println("Total Count: " + total);
		} catch (Throwable e) {
			e.printStackTrace();
		}
	}
}
The output is shown below; the test table consists of three regions, holding 9, 13, and 78 rows respectively:

13/01/26 18:59:53 INFO zookeeper.ClientCnxn: Opening socket connection to server master/172.21.15.21:2181. Will not attempt to authenticate using SASL (unable to locate a login configuration)
13/01/26 18:59:53 INFO zookeeper.ClientCnxn: Socket connection established to master/172.21.15.21:2181, initiating session
13/01/26 18:59:53 INFO zookeeper.ClientCnxn: Session establishment complete on server master/172.21.15.21:2181, sessionid = 0x13c6a82639f000c, negotiated timeout = 40000
Region: test,,1358337586380.f3e04b8b43d073a509e9a374f643277a., Count: 9
Region: test,209,1358337769870.be5a99319eca6f2881ccd73789bfafb0., Count: 13
Region: test,222,1358337769870.94685f417a95e91d0c9185a95974f866., Count: 78
Total Count: 100

The Batch class offers a more convenient way to build the remote endpoint call: Batch.forMethod() returns a ready-made Batch.Call instance to hand to the remote region servers. Here is EndPointExample modified accordingly; it reads noticeably cleaner:
Batch.Call<RowCountProtocol, Long> call =
		Batch.forMethod(RowCountProtocol.class, "getKeyValueCount");

Map<byte[], Long> results = table.coprocessorExec(
		RowCountProtocol.class,
		null,
		null,
		call);


However, implementing Batch.Call directly is more flexible and powerful, because you can perform additional processing on the results. The following example fetches the row count and the KeyValue count in a single call:

Map<byte[], Pair<Long, Long>> results = table.coprocessorExec(
		RowCountProtocol.class,
		null,
		null,
		new Batch.Call<RowCountProtocol, Pair<Long, Long>>() {
			@Override
			public Pair<Long, Long> call(RowCountProtocol instance)
					throws IOException {
				//fetch both counts in a single remote call
				return new Pair<Long, Long>(
						instance.getRowCount(),
						instance.getKeyValueCount());
			}
		});

long totalRows = 0;
long totalKeyValues = 0;
for (Map.Entry<byte[], Pair<Long, Long>> entry : results.entrySet()) {
	totalRows += entry.getValue().getFirst();
	totalKeyValues += entry.getValue().getSecond();
	System.out.println("region=" + Bytes.toString(entry.getKey()) +
			", rowCount=" + entry.getValue().getFirst() +
			", keyValueCount=" + entry.getValue().getSecond());
}
System.out.println("totalRows=" + totalRows +
		", totalKeyValues=" + totalKeyValues);

Alternatively, the coprocessorProxy() method returns a client-side proxy for the endpoint. Through the proxy you operate on the single region that hosts the given row key (the row key itself does not have to exist: it merely selects the region whose key range contains it).

RowCountProtocol protocol = table.coprocessorProxy(
		RowCountProtocol.class, Bytes.toBytes("202"));

long rowsInRegion = protocol.getRowCount();

System.out.println("Region Row Count: " + rowsInRegion);



