HBase Queries (2) --- Dedicated Filters

2.1 SingleColumnValueFilter

This filter selects a column family and a qualifier, then compares that column's value; matching rows are returned in full. Note that rows which do not contain the column are also returned, unless filterIfMissing is set to true.

Constructors:

SingleColumnValueFilter(byte[] family, byte[] qualifier, CompareOp compareOp, byte[] value)   
SingleColumnValueFilter(byte[] family, byte[] qualifier, CompareOp compareOp, WritableByteArrayComparable comparator)

The first constructor is equivalent to constructing a BinaryComparator instance. The remaining parameters have the same meaning as in CompareFilter.

This filter has two important parameters, filterIfMissing and latestVersionOnly, with the following getters and setters:

boolean getFilterIfMissing()  
void setFilterIfMissing(boolean filterIfMissing)  
boolean getLatestVersionOnly()  
void setLatestVersionOnly(boolean latestVersionOnly)  

If filterIfMissing is set to true, rows that do not contain the specified column are filtered out entirely. The default is false (if you do not set it, a query through this filter also returns rows in which the specified column is absent).
If latestVersionOnly is set to false, earlier versions are checked as well. The default is true. Example:

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleColumnValueFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new SingleColumnValueFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		System.out.println("---------testFilter-----------");
		SingleColumnValueFilter filter = new SingleColumnValueFilter(Bytes.toBytes("c1"), Bytes.toBytes("a"), CompareFilter.CompareOp.EQUAL,
				new BinaryComparator(Bytes.toBytes("c1a")));
		filter.setFilterIfMissing(true);

		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();

		System.out.println("-----------------------------");
		Get get = new Get(Bytes.toBytes("r1"));
		get.setFilter(filter);
		Result result = table.get(get);
		showResult(result);
	}
}
Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


---------testFilter-----------
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------------------


2.2 SingleColumnValueExcludeFilter

This filter is the opposite of the one above: when the condition matches, the tested column itself is excluded from the returned result.
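Since the exclude behavior can be surprising, here is a small plain-Java simulation of its semantics (no HBase dependency; the class and method names below are mine, not HBase's): a row passes when the tested column equals the target value, and the tested column itself is dropped from what is returned.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simulation of SingleColumnValueExcludeFilter semantics (illustrative only):
// keep a row when the tested column equals the target value, then remove
// the tested column from the cells that are returned.
public class ExcludeFilterDemo {
	static Map<String, Map<String, String>> applyExcludeFilter(
			Map<String, Map<String, String>> rows, String column, String value) {
		Map<String, Map<String, String>> out = new LinkedHashMap<>();
		for (Map.Entry<String, Map<String, String>> row : rows.entrySet()) {
			// the row passes only if the tested column matches the value
			if (value.equals(row.getValue().get(column))) {
				Map<String, String> cells = new LinkedHashMap<>(row.getValue());
				cells.remove(column); // exclude the tested column itself
				out.put(row.getKey(), cells);
			}
		}
		return out;
	}

	public static void main(String[] args) {
		Map<String, Map<String, String>> rows = new LinkedHashMap<>();
		rows.put("r1", new LinkedHashMap<>(Map.of("c1:a", "r1c1a", "c1:b", "r1c1b")));
		rows.put("r3", new LinkedHashMap<>(Map.of("c1:a", "c1a", "c1:b", "r3c1b")));
		// only r3 matches c1:a = "c1a", and its c1:a cell is stripped
		System.out.println(applyExcludeFilter(rows, "c1:a", "c1a")); // {r3={c1:b=r3c1b}}
	}
}
```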

2.3 PrefixFilter

Returns all rows whose row key matches the given prefix (a row-key prefix filter).

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new PrefixFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		Filter filter = new PrefixFilter(Bytes.toBytes("r"));
		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();

		showLine();
		Get get = new Get(Bytes.toBytes("r4"));
		get.setFilter(filter);
		Result r = table.get(get);
		showResult(r);
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}


2.4 PageFilter

A page filter: the pageSize parameter sets how many rows are returned per page.

The client has to remember the row key of the last row seen on the previous page.

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PageFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new PageFilterExample().doMain();
	}

	@Override
	public void testFilter() throws IOException {
		final byte[] POSTFIX = new byte[] { 0x00 };
		HTable table = new HTable(config, tableName);
		Filter filter = new PageFilter(3);
		byte[] lastRow = null;
		int totalRows = 0;
		while (true) {
			Scan scan = new Scan();
			scan.setFilter(filter);
			if (lastRow != null) {
				// note the POSTFIX appended here; without it this would loop forever
				byte[] startRow = Bytes.add(lastRow, POSTFIX);
				scan.setStartRow(startRow);
			}
			ResultScanner scanner = table.getScanner(scan);
			int localRows = 0;
			Result result;
			while ((result = scanner.next()) != null) {
				localRows++;
				showResult(result);
				totalRows++;
				lastRow = result.getRow();
			}
			scanner.close();
			System.out.println("localRows=" + localRows);
			if (localRows == 0) {
				break;
			}
		}
		System.out.println("total rows:" + totalRows);
	}

}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
localRows=3
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
localRows=1
localRows=0
total rows:4

Because HBase rows are sorted lexicographically, an extra zero byte has to be appended to the previous lastRow to mark the new start. The row at startRow is itself included in the scan, so without this suffix the last row of the previous page would be returned again.
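The role of the extra zero byte can be checked without a cluster. The sketch below is my own illustrative code, with a comparator written to behave like unsigned lexicographic byte ordering (as Bytes.compareTo does): lastRow + 0x00 is the smallest key strictly greater than lastRow, so using it as the next inclusive start row skips exactly lastRow and nothing else.

```java
import java.util.Arrays;

// Why appending 0x00 works: row keys are ordered as unsigned lexicographic
// byte arrays, and lastRow + 0x00 sorts immediately after lastRow.
public class RowKeySuccessorDemo {
	// unsigned lexicographic compare over byte arrays
	static int compare(byte[] a, byte[] b) {
		int n = Math.min(a.length, b.length);
		for (int i = 0; i < n; i++) {
			int d = (a[i] & 0xff) - (b[i] & 0xff);
			if (d != 0) return d;
		}
		return a.length - b.length; // shorter prefix sorts first
	}

	// the next possible row key after 'row': append a single 0x00 byte
	static byte[] successor(byte[] row) {
		return Arrays.copyOf(row, row.length + 1);
	}

	public static void main(String[] args) {
		byte[] lastRow = "r3".getBytes();
		byte[] start = successor(lastRow); // "r3" + 0x00
		System.out.println(compare(lastRow, start) < 0);          // true: "r3" sorts before it
		System.out.println(compare(start, "r3a".getBytes()) < 0); // true: no key fits in between
		System.out.println(compare(start, "r4".getBytes()) < 0);  // true
	}
}
```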

 

2.5 KeyOnlyFilter

Some applications only need the key data (rowkey, family, and qualifier) rather than the actual values; this filter serves that case. Its constructor is:

KeyOnlyFilter(boolean lenAsVal)  

lenAsVal defaults to false, meaning the value's length is not substituted for the value; if true, the length of each value is returned as the value. The class's transform method is an implementation of transform from the Filter interface (when lenAsVal is true, it emits the value's length in place of the value).

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

public class KeyOnlyFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new KeyOnlyFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		HTable table = new HTable(config, tableName);
		Filter filter = new KeyOnlyFilter(false);
		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a= c1:b= c2:a= c2:b=}
{rowkey:r2 c1:a= c2:b=}
{rowkey:r3 c1:a= c1:b= c2:a= c2:b=}
{rowkey:r4 c1:a= c1:b= c2:a= c2:b=}

2.6 FirstKeyOnlyFilter

When scanning a table with FirstKeyOnlyFilter, only the first KeyValue of each row is returned (just the first column's value, much like SELECT COUNT(*) in SQL).
For aggregate operations over an HBase table such as count or sum, FirstKeyOnlyFilter brings a performance improvement.

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class FirstKeyOnlyFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new FirstKeyOnlyFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		HTable table = new HTable(config, tableName);
		Filter filter = new FirstKeyOnlyFilter();
		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();
	}

}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a}
{rowkey:r2 c1:a=r2c1a}
{rowkey:r3 c1:a=c1a}
{rowkey:r4 c1:a=c1a}

2.7 InclusiveStopFilter

An HBase scan includes the start row but excludes the stop row. With this filter the stop row is included as well; it effectively sets an inclusive stop row.

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class InclusiveStopFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new InclusiveStopFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		HTable table = new HTable(config, tableName);
		
		Scan scan = new Scan();
		scan.setStartRow(Bytes.toBytes("r2"));
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();
		
		showLine();
		Filter filter1 = new InclusiveStopFilter(Bytes.toBytes("r1"));
		Scan scan1 = new Scan();
		scan1.setFilter(filter1);
		scan1.setStartRow(Bytes.toBytes("r2"));
		ResultScanner scanner1 = table.getScanner(scan1);
		showScanner(scanner1);
		scanner1.close();
		
		showLine();
		Filter filter2 = new InclusiveStopFilter(Bytes.toBytes("r2"));
		Scan scan2 = new Scan();
		scan2.setFilter(filter2);
		scan2.setStartRow(Bytes.toBytes("r2"));
		ResultScanner scanner2 = table.getScanner(scan2);
		showScanner(scanner2);
		scanner2.close();
		
		showLine();
		Filter filter3 = new InclusiveStopFilter(Bytes.toBytes("r3"));
		Scan scan3 = new Scan();
		scan3.setFilter(filter3);
		scan3.setStartRow(Bytes.toBytes("r2"));
		ResultScanner scanner3 = table.getScanner(scan3);
		showScanner(scanner3);
		scanner3.close();

	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------
-----------------
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
-----------------
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}

The example shows: without the filter, every row from r2 onward is returned; when startRow > stopRow nothing is returned; when startRow = stopRow exactly one row is returned; when startRow < stopRow, all rows from startRow through stopRow inclusive are returned.

2.8 TimestampsFilter

To retrieve cells at specific timestamps, use the following constructor:

TimestampsFilter(List<Long> timestamps)  

It takes a list of timestamps. The filter can also be combined with scan.setTimeRange, for example:

package filter;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.filter.TimestampsFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class TimestampsFilterExample extends FilterExampleBase2 {

	public static void main(String[] args) throws IOException {
		new TimestampsFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		List<Long> ts = new ArrayList<Long>();
		ts.add(new Long(5));
		ts.add(new Long(10));
		ts.add(new Long(15));
		Filter filter = new TimestampsFilter(ts);

		Scan scan1 = new Scan();
		scan1.setFilter(filter);
		ResultScanner scanner1 = table.getScanner(scan1);
		showScanner(scanner1);
		scanner1.close();
		
		showLine();
		Scan scan2 = new Scan();
		scan2.setFilter(filter);
		scan2.setTimeRange(8, 12);
		ResultScanner scanner2 = table.getScanner(scan2);
		showScanner(scanner2);
		scanner2.close();

	}
}


class FilterExampleBase2 {

	public static Configuration config;
	public static String tableName = "test";
	public static HBaseAdmin hBaseAdmin;
	public static HTablePool pool;
	public static HTable table;

	public static void setup() throws MasterNotRunningException, 
                                    ZooKeeperConnectionException {
		config = HBaseConfiguration.create();
		config.set("hbase.zookeeper.quorum", "10.10.4.55");
		config.set("hbase.zookeeper.property.clientPort", "2181");

		hBaseAdmin = new HBaseAdmin(config);
		pool = new HTablePool(config, 1000);
		table = (HTable) pool.getTable(tableName);
	}

	public static void createTable() throws IOException {
		System.out.println("start create table......");
		if (hBaseAdmin.tableExists(tableName)) {
			hBaseAdmin.disableTable(tableName);
			hBaseAdmin.deleteTable(tableName);
			System.out.println(tableName + " is exist delete!!!");
		}
		HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
		hTableDescriptor.addFamily(new HColumnDescriptor("c1"));
		hTableDescriptor.addFamily(new HColumnDescriptor("c2"));

		hBaseAdmin.createTable(hTableDescriptor);
		System.out.println("create table over......");
	}

	public static void dropTable() throws IOException {
		System.out.println("start drop table......");
		if (hBaseAdmin.tableExists(tableName)) {
			hBaseAdmin.disableTable(tableName);
			hBaseAdmin.deleteTable(tableName);
			System.out.println(tableName + " is exist delete!!!");
		}
		System.out.println("end drop table......");
	}

	public static void insert() throws IOException {
		System.out.println("start insert......");

		Put put = new Put("r1".getBytes(), 5);
		put.add("c1".getBytes(), "a".getBytes(), "r1c1a".getBytes());
		put.add("c1".getBytes(), "b".getBytes(), "r1c1b".getBytes());
		put.add("c2".getBytes(), "a".getBytes(), "r1c2a".getBytes());
		put.add("c2".getBytes(), "b".getBytes(), "r1c2b".getBytes());
		table.put(put);
		Put put2 = new Put("r2".getBytes(), 9);
		put2.add("c1".getBytes(), "a".getBytes(), "r2aaa".getBytes());
		put2.add("c1".getBytes(), "a".getBytes(), "r2c1a".getBytes());
		put2.add("c2".getBytes(), "b".getBytes(), "r2bbb".getBytes());
		put2.add("c2".getBytes(), "b".getBytes(), "r2c2b".getBytes());
		table.put(put2);

		Put put3 = new Put("r3".getBytes(), 10);
		put3.add("c1".getBytes(), "a".getBytes(), "c1a".getBytes());
		put3.add("c1".getBytes(), "b".getBytes(), "r3c1b".getBytes());
		put3.add("c2".getBytes(), "a".getBytes(), "r3c2b".getBytes());
		put3.add("c2".getBytes(), "b".getBytes(), "r3c2b".getBytes());

		table.put(put3);
		Put put4 = new Put("r4".getBytes(), 15);
		put4.add("c1".getBytes(), "a".getBytes(), "c1a".getBytes());
		put4.add("c1".getBytes(), "b".getBytes(), "r4c1b".getBytes());
		put4.add("c2".getBytes(), "a".getBytes(), "r4c2b".getBytes());
		put4.add("c2".getBytes(), "b".getBytes(), "r4c2b".getBytes());
		table.put(put4);
		System.out.println("end insert......");
	}

	public static void findAll() throws IOException {
		System.out.println("\n\n-----------------findAll() start-------------");
		HTablePool pool = new HTablePool(config, 1000);
		HTable table = (HTable) pool.getTable(tableName);

		ResultScanner rs = table.getScanner(new Scan());
		for (Result r : rs) {
			System.out.print("{rowkey:" + new String(r.getRow()));
			for (KeyValue keyValue : r.raw()) {
				System.out.print(" " + new String(keyValue.getFamily()) + ":" 
                               + new String(keyValue.getQualifier()) + "="
						+ new String(keyValue.getValue()));
			}
			System.out.println("}");
		}
		System.out.println("-----------------findAll() end-------------\n\n");
	}

	public static void cleanup() throws IOException {
		table.close();
		pool.close();
		hBaseAdmin.close();
	}

//	public static void main(String[] args) throws IOException {
//		new FilterExampleBase().doMain();
//	}
	
	public void doMain() throws IOException {
		setup();

		createTable();

		insert();

		findAll();

		testFilter();

		cleanup();
	}
	
	public void testFilter() throws IOException {
		HTable table = new HTable(config, tableName);

		Filter filter = new QualifierFilter(CompareFilter.CompareOp.LESS, new BinaryComparator(Bytes.toBytes("b")));

		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();

		System.out.println("-----------------------");
		Get get = new Get(Bytes.toBytes("r4"));
		get.setFilter(filter);
		Result r = table.get(get);
		showResult(r);
	}
	
	public static void showScanner(ResultScanner scanner) {
		for (Result r : scanner) {
			System.out.print("{rowkey:" + new String(r.getRow()));
			for (KeyValue keyValue : r.raw()) {
				System.out.print(" " + new String(keyValue.getFamily()) + ":" 
                               + new String(keyValue.getQualifier()) + "="
						+ new String(keyValue.getValue()));
			}
			System.out.println("}");
		}
	}
	
	public static void showResult(Result r) {
		if(r == null || r.getRow() == null) {
			return;
		}
		System.out.print("{rowkey:" + new String(r.getRow()));
		for (KeyValue keyValue : r.raw()) {
			System.out.print(" " + new String(keyValue.getFamily()) + ":" 
                          + new String(keyValue.getQualifier()) + "="
					+ new String(keyValue.getValue()));
		}
		System.out.println("}");
	}
	
	public static void showLine() {
		System.out.println("-----------------");
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}

2.9 ColumnCountGetFilter

The class's Javadoc explains:

Simple filter that returns first N columns on row only. This filter was written to test filters in Get and as soon as it gets its quota of columns, {@link #filterAllRemaining()} returns true. This makes this filter unsuitable as a Scan filter.

In other words, this class is not suitable for use with Scan; use it with Get to limit the number of columns returned.

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
import org.apache.hadoop.hbase.filter.Filter;

public class ColumnCountGetFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new ColumnCountGetFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		Filter filter = new ColumnCountGetFilter(3);

		Scan scan1 = new Scan();
//		scan1.setFilter(filter);
		ResultScanner scanner1 = table.getScanner(scan1);
		showScanner(scanner1);
		scanner1.close();
		
		showLine();
		Get get = new Get("r4".getBytes());
		get.setFilter(filter);
		Result r = table.get(get);
		showResult(r);
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b}

2.10 ColumnPaginationFilter

Its Javadoc explains:

A filter, based on the ColumnCountGetFilter, takes two arguments: limit and offset. This filter can be used for row-based indexing, where references to other tables are stored across many columns, in order to improve lookups and paginated results for end users. Only most recent versions are considered for pagination.

It filters the returned columns: limit columns are returned, starting at offset. It can therefore also be used for column-wise pagination.

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;
import org.apache.hadoop.hbase.filter.Filter;

public class ColumnPaginationFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new ColumnPaginationFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		Filter filter = new ColumnPaginationFilter(2, 0);

		Scan scan = new Scan();
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();

		showLine();
		Scan scan2 = new Scan();
		scan2.setFilter(filter);
		ResultScanner scanner2 = table.getScanner(scan2);
		showScanner(scanner2);
		scanner2.close();
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b}

2.11 ColumnPrefixFilter

Similar to PrefixFilter, but it matches on the column qualifier instead of the row key. Example:

package filter;

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnPrefixFilterExample extends FilterExampleBase {

	public static void main(String[] args) throws IOException {
		new ColumnPrefixFilterExample().doMain();
	}

	public void testFilter() throws IOException {
		Filter filter = new ColumnPrefixFilter(Bytes.toBytes("a"));
		Scan scan = new Scan();
		scan.setFilter(filter);
		ResultScanner scanner = table.getScanner(scan);
		showScanner(scanner);
		scanner.close();

		showLine();
		Get get = new Get(Bytes.toBytes("r4"));
		get.setFilter(filter);
		Result r = table.get(get);
		showResult(r);
	}
}

Result:
start create table......
test is exist delete!!!
create table over......
start insert......
end insert......


-----------------findAll() start-------------
{rowkey:r1 c1:a=r1c1a c1:b=r1c1b c2:a=r1c2a c2:b=r1c2b}
{rowkey:r2 c1:a=r2c1a c2:b=r2c2b}
{rowkey:r3 c1:a=c1a c1:b=r3c1b c2:a=r3c2b c2:b=r3c2b}
{rowkey:r4 c1:a=c1a c1:b=r4c1b c2:a=r4c2b c2:b=r4c2b}
-----------------findAll() end-------------


{rowkey:r1 c1:a=r1c1a c2:a=r1c2a}
{rowkey:r2 c1:a=r2c1a}
{rowkey:r3 c1:a=c1a c2:a=r3c2b}
{rowkey:r4 c1:a=c1a c2:a=r4c2b}
-----------------
{rowkey:r4 c1:a=c1a c2:a=r4c2b}

2.12 RandomRowFilter

Randomly includes rows in the result. The constructor is:

RandomRowFilter(float chance)  

chance is a value between 0 and 1.0: if chance < 0 no rows are returned, and if chance > 1 every row is included.
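The chance semantics can be illustrated with a plain-Java simulation (illustrative only, not HBase's implementation): each row is kept when a uniform random float in [0, 1) falls below chance, which is why a chance at or below 0 keeps nothing and a chance at or above 1 keeps everything.

```java
import java.util.Random;

// Simulation of RandomRowFilter's per-row coin flip (illustrative only).
public class RandomChanceDemo {
	static int keptRows(int totalRows, float chance, Random rnd) {
		int kept = 0;
		for (int i = 0; i < totalRows; i++) {
			// nextFloat() is uniform in [0, 1): a row survives when it
			// lands below chance
			if (rnd.nextFloat() < chance) kept++;
		}
		return kept;
	}

	public static void main(String[] args) {
		Random rnd = new Random(42);
		System.out.println(keptRows(1000, -0.5f, rnd)); // 0: chance < 0 keeps nothing
		System.out.println(keptRows(1000, 1.5f, rnd));  // 1000: chance > 1 keeps all
		System.out.println(keptRows(1000, 0.5f, rnd));  // roughly 500
	}
}
```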

