Today let's walk through how a coprocessor (endpoint) call is processed.
Background reading on RPC: http://blog.csdn.net/chenfenggang/article/details/75268998
First, you need to understand what RPC is, and the key pieces of a typical RPC framework:
1) the server-side interface;
2) the server-side implementation class;
3) the server-side process;
4) the client-side stub (including the connected socket).
This post analyzes how the client finds the server-side interface, including how it obtains the socket (the stub).
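Before diving into HBase, the four pieces can be sketched in plain Java. This is only an illustration: the class name RpcSketch and the hard-coded count are made up, and the "socket" is faked with an in-process call, but the roles (interface, implementation, server, stub) match the list above.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcSketch {
    // 1) server-side interface
    public interface RowCountService {
        long getRowCount();
    }

    // 2) server-side implementation class
    static class RowCountServiceImpl implements RowCountService {
        public long getRowCount() { return 42L; }
    }

    // 3) "server process": here just an in-process object; a real server
    //    would listen on a socket and dispatch by service/method name
    static final RowCountService SERVER = new RowCountServiceImpl();

    // 4) client stub: a dynamic proxy standing in for the remote service;
    //    a real stub would serialize the call and send it over the socket
    public static RowCountService newStub() {
        return (RowCountService) Proxy.newProxyInstance(
                RowCountService.class.getClassLoader(),
                new Class<?>[] { RowCountService.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Exception {
                        // pretend this line is "write request to socket, read response"
                        return method.invoke(SERVER, args);
                    }
                });
    }

    public static void main(String[] args) {
        // the client only ever sees the interface, never the implementation
        System.out.println(newStub().getRowCount());
    }
}
```

The key point, which HBase's generated protobuf stubs follow as well, is that the client-side object implements the same interface as the server but only forwards calls.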
The example for this walkthrough is RowCountEndpoint, the official example that ships under the examples directory.
public class RowCountEndpoint extends ExampleProtos.RowCountService
    implements Coprocessor, CoprocessorService {

  private RegionCoprocessorEnvironment env;

  public RowCountEndpoint() {
  }

  @Override
  public Service getService() {
    return this;
  }

  @Override
  public void getRowCount(RpcController controller, ExampleProtos.CountRequest request,
      RpcCallback<ExampleProtos.CountResponse> done) {
    Scan scan = new Scan();
    scan.setFilter(new FirstKeyOnlyFilter());
    ExampleProtos.CountResponse response = null;
    InternalScanner scanner = null;
    try {
      scanner = env.getRegion().getScanner(scan);
      List<Cell> results = new ArrayList<Cell>();
      boolean hasMore = false;
      byte[] lastRow = null;
      long count = 0;
      do {
        hasMore = scanner.next(results);
        for (Cell kv : results) {
          byte[] currentRow = CellUtil.cloneRow(kv);
          if (lastRow == null || !Bytes.equals(lastRow, currentRow)) {
            lastRow = currentRow;
            count++;
          }
        }
        results.clear();
      } while (hasMore);
      response = ExampleProtos.CountResponse.newBuilder()
          .setCount(count).build();
    } catch (IOException ioe) {
      ResponseConverter.setControllerException(controller, ioe);
    } finally {
      if (scanner != null) {
        try {
          scanner.close();
        } catch (IOException ignored) {}
      }
    }
    done.run(response);
  }

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    if (env instanceof RegionCoprocessorEnvironment) {
      this.env = (RegionCoprocessorEnvironment) env;
    } else {
      throw new CoprocessorException("Must be loaded on a table region!");
    }
  }

  @Override
  public void stop(CoprocessorEnvironment env) throws IOException {
    // nothing to do
  }
}
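One thing the walkthrough skips is how this endpoint actually gets loaded onto the RegionServers. One common way, assuming the jar containing RowCountEndpoint is already on the RegionServer classpath, is static loading through hbase-site.xml (it can also be attached to a single table via its table descriptor):

```xml
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.example.RowCountEndpoint</value>
</property>
```

A RegionServer restart is required for this static configuration to take effect.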
The protobuf definition, ExampleProtos.proto:
package hbase.pb;

option java_package = "org.apache.hadoop.hbase.coprocessor.example.generated";
option java_outer_classname = "ExampleProtos";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message CountRequest {
}

message CountResponse {
  required int64 count = 1 [default = 0];
}

service RowCountService {
  rpc getRowCount(CountRequest)
      returns (CountResponse);
  rpc getKeyValueCount(CountRequest)
      returns (CountResponse);
}
A simple client I wrote myself:
public static void main(String[] args) throws Throwable {
  Configuration configuration = HBaseConfiguration.create();
  Connection connection = ConnectionFactory.createConnection(configuration);
  Table table = connection.getTable(TableName.META_TABLE_NAME);
  // Define a Batch.Call that the method below will invoke once per region.
  // ExampleProtos.RowCountService.class is the generated service interface.
  table.coprocessorService(ExampleProtos.RowCountService.class, null, null,
      new Batch.Call<ExampleProtos.RowCountService, Long>() {
        @Override
        public Long call(ExampleProtos.RowCountService instance) throws IOException {
          RpcController controller = new ServerRpcController();
          ExampleProtos.CountRequest request = ExampleProtos.CountRequest.newBuilder().build();
          BlockingRpcCallback<ExampleProtos.CountResponse> done =
              new BlockingRpcCallback<ExampleProtos.CountResponse>();
          instance.getKeyValueCount(controller, request, done);
          return done.get().getCount();
        }
      });
}
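The BlockingRpcCallback used above simply parks the calling thread until the response has been delivered. A minimal, self-contained sketch of that pattern follows; the class names are illustrative, not HBase's actual implementation:

```java
public class BlockingCallbackSketch {
    // Stand-in for HBase's BlockingRpcCallback: run(...) delivers the
    // result (called by the RPC layer), get() blocks until it arrives.
    public static class BlockingCallback<R> {
        private R result;
        private boolean done;

        public synchronized void run(R response) {
            result = response;
            done = true;
            notifyAll();           // wake any thread parked in get()
        }

        public synchronized R get() throws InterruptedException {
            while (!done) {        // guard against spurious wakeups
                wait();
            }
            return result;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final BlockingCallback<Long> cb = new BlockingCallback<Long>();
        // simulate the RPC layer delivering a response on another thread
        new Thread(new Runnable() {
            public void run() { cb.run(7L); }
        }).start();
        System.out.println(cb.get());
    }
}
```

This is why the client's Batch.Call can return `done.get().getCount()` synchronously even though the underlying protobuf API is callback-based.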
coprocessorService ends up calling this method inside HTable:
public <T extends Service, R> void coprocessorService(final Class<T> service,
    byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable,
    final Batch.Callback<R> callback) throws ServiceException, Throwable {
  // Compute the region start keys in range; one request will be sent to
  // each region, and each region computes its own partial count.
  List<byte[]> keys = getStartKeysInRange(startKey, endKey);
  Map<byte[], Future<R>> futures =
      new TreeMap<byte[], Future<R>>(Bytes.BYTES_COMPARATOR);
  for (final byte[] r : keys) {
    // One channel per region; it wraps the connection and the RPC caller
    // factory. Note the concrete class: RegionCoprocessorRpcChannel.
    final RegionCoprocessorRpcChannel channel =
        new RegionCoprocessorRpcChannel(connection, tableName, r);
    // submit() will eventually run the Callable's call() below
    Future<R> future = pool.submit(
        new Callable<R>() {
          @Override
          public R call() throws Exception {
            // Instantiate a client-side stub for the service defined in
            // ExampleProtos.proto; this goes through the generated
            // newStub method (see below).
            T instance = ProtobufUtil.newServiceStub(service, channel);
            // Invoke the user's Batch.Call from the previous snippet;
            // instance is the RowCountService stub on which
            // getKeyValueCount was called.
            R result = callable.call(instance);
            byte[] region = channel.getLastRegion();
            if (callback != null) {
              callback.update(region, r, result);
            }
            return result;
          }
        });
    futures.put(r, future);
  }
  for (Map.Entry<byte[], Future<R>> e : futures.entrySet()) {
    try {
      e.getValue().get();
    } catch (ExecutionException ee) {
      throw ee.getCause();
    } catch (InterruptedException ie) {
      throw new InterruptedIOException(ie.getMessage());
    }
  }
}
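Stripped of the HBase types, the fan-out pattern in coprocessorService is: submit one task per region start key, remember the futures in a sorted map, then wait for all of them. A self-contained sketch (FanOutSketch and the per-key "work" are made up for illustration; the real task builds a channel and calls the stub):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FanOutSketch {
    // Mirrors the shape of coprocessorService: one task per region start
    // key, futures collected into a sorted map, then all joined.
    public static Map<String, Long> countPerRegion(String[] startKeys) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            Map<String, Future<Long>> futures = new TreeMap<String, Future<Long>>();
            for (final String key : startKeys) {
                futures.put(key, pool.submit(new Callable<Long>() {
                    public Long call() {
                        // stand-in for "create a channel for this region
                        // and invoke the stub against it"
                        return (long) key.length();
                    }
                }));
            }
            Map<String, Long> results = new TreeMap<String, Long>();
            for (Map.Entry<String, Future<Long>> e : futures.entrySet()) {
                results.put(e.getKey(), e.getValue().get()); // blocks per future
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The TreeMap keyed by start key is what lets HBase report per-region partial results back through Batch.Callback in a deterministic order.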
The ProtobufUtil.newServiceStub(service, channel) call above lands in the generated code below, which hands the channel (carrying the connection and related state) to the stub:

public static Stub newStub(
    com.google.protobuf.RpcChannel channel) {
  return new Stub(channel);
}
At this point it is tempting to think the server-side implementation is being invoked directly. It is not: what actually runs is the client-side generated method below.
public void getKeyValueCount(
    com.google.protobuf.RpcController controller,
    CountRequest request,
    com.google.protobuf.RpcCallback<CountResponse> done) {
  channel.callMethod(
      getDescriptor().getMethods().get(1),
      controller,
      request,
      CountResponse.getDefaultInstance(),
      com.google.protobuf.RpcUtil.generalizeCallback(
          done,
          CountResponse.class,
          CountResponse.getDefaultInstance()));
}
which in turn calls:

public void callMethod(Descriptors.MethodDescriptor method,
    RpcController controller,
    Message request, Message responsePrototype,
    RpcCallback<Message> callback) {
  response = callExecService(controller, method, request, responsePrototype);
}
Given the channel's concrete class, RegionCoprocessorRpcChannel, the call lands in:
@Override
protected Message callExecService(RpcController controller,
    Descriptors.MethodDescriptor method, Message request, Message responsePrototype)
    throws IOException {
  // ... much omitted ...
  CoprocessorServiceResponse result = rpcCallerFactory.<CoprocessorServiceResponse> newCaller()
      .callWithRetries(callable, operationTimeout);
  // ... much omitted ...
}
If you have read chapter five of this series, on how regionLocator resolves a region's location, you will recognize where this is going. Inside callWithRetries:

callable.prepare(tries != 0); // if called with false, check table status on ZK
interceptor.intercept(context.prepare(callable, tries));
return callable.call(getTimeout(callTimeout));

The prepare phase sets up the call, resolving the target server and obtaining its stub:

setStub(cConnection.getClient(dest));

and the call phase then issues the actual RPC, for example:

ClientProtos.GetResponse response = getStub().get(controller, request);
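The prepare/call/retry loop above can be reduced to a simple shape. The sketch below is not HBase's RpcRetryingCaller, just an illustration of the pattern (names and the fixed backoff are made up); the comment marks where the real caller re-resolves the region location:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryCallerSketch {
    // Shape of callWithRetries: prepare, call, and on IOException back
    // off briefly and try again, up to a bounded number of attempts.
    public static <T> T callWithRetries(Callable<T> call, int maxAttempts) throws Exception {
        IOException last = null;
        for (int tries = 0; tries < maxAttempts; tries++) {
            // prepare(tries != 0): on a retry, the real caller re-checks
            // the region location before re-sending the request
            try {
                return call.call();
            } catch (IOException ioe) {
                last = ioe;
                Thread.sleep(10L * (tries + 1)); // crude linear backoff
            }
        }
        throw last; // all attempts exhausted
    }
}
```

The important design point is that only IOException triggers a retry; other failures propagate immediately, just as in the real caller where non-retriable errors are rethrown.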
The long tail of what happens after this point is out of scope here; this basically wraps things up.