RPC framework: remote procedure call
1. The server starts and publishes a service.
2. The client obtains a proxy object for the server's interface protocol.
3. The client calls the server-side functionality through the proxy object's interface protocol (under the hood the call travels over a socket).
In HDFS, the client-side interface protocol object is org.apache.hadoop.hdfs.protocol.ClientProtocol.
Keywords: rpc, hadoop, dynamic proxy, socket
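The dynamic-proxy mechanism that RPC.getProxy builds on can be sketched with the JDK's own java.lang.reflect.Proxy, with no Hadoop dependency. This is a local stand-in: in real Hadoop RPC the InvocationHandler serializes the method name and arguments and ships them over a socket, whereas here it answers locally just to show how interface calls are intercepted. The class and handler names are hypothetical; the interface mirrors the tutorial's LoginServiceInterface.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxySketch {

    public interface LoginServiceInterface {
        String login(String username, String password);
    }

    // Stand-in for the network hop: Hadoop's handler would serialize the call
    // and send it to the server over a socket; this one answers locally.
    static class LocalInvoker implements InvocationHandler {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            if ("login".equals(method.getName())) {
                return args[0] + " logged in successfully!";
            }
            throw new UnsupportedOperationException(method.getName());
        }
    }

    // Analogous to RPC.getProxy: returns an object that implements the
    // interface protocol without any hand-written implementation class.
    public static LoginServiceInterface getProxy() {
        return (LoginServiceInterface) Proxy.newProxyInstance(
                ProxySketch.class.getClassLoader(),
                new Class<?>[]{LoginServiceInterface.class},
                new LocalInvoker());
    }

    public static void main(String[] args) {
        LoginServiceInterface proxy = getProxy();
        System.out.println(proxy.login("mijie", "123456"));
    }
}
```

Every call on the returned proxy funnels into invoke(), which is exactly where Hadoop hooks in its socket transport.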
LoginServiceInterface: the interface protocol
package cn.itcast.hadoop.rpc;

public interface LoginServiceInterface {
    // Hadoop RPC requires a versionID field; the client must request the same version.
    public static final long versionID = 1L;

    public String login(String username, String password);
}
LoginServiceImpl: the interface implementation
package cn.itcast.hadoop.rpc;

public class LoginServiceImpl implements LoginServiceInterface {

    @Override
    public String login(String username, String password) {
        return username + " logged in successfully!";
    }
}
Starter: the server side, built with RPC.Builder and started via server.start()
package cn.itcast.hadoop.rpc;

import java.io.IOException;

import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Builder;
import org.apache.hadoop.ipc.RPC.Server;

public class Starter {

    public static void main(String[] args) throws HadoopIllegalArgumentException, IOException {
        Builder builder = new RPC.Builder(new Configuration());
        // Bind address and port, the protocol interface, and the implementation instance.
        builder.setBindAddress("cch")
               .setPort(10096)
               .setProtocol(LoginServiceInterface.class)
               .setInstance(new LoginServiceImpl());
        //builder.setSecretManager(new TokenIdentifier)
        Server server = builder.build();
        server.start();
    }
}
LoginController: the client side, calling the server through a proxy object
package cn.itcast.hadoop.rpc;

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.ipc.RPC;

public class LoginController {

    public static void main(String[] args) throws Exception {
        // Compare ClientProtocol / DFSClient in HDFS: same pattern.
        // The requested version (1L) must match LoginServiceInterface.versionID.
        LoginServiceInterface proxy = RPC.getProxy(LoginServiceInterface.class,
                1L,
                new InetSocketAddress("cch", 10096),
                new Configuration());
        String result = proxy.login("mijie", "123456");
        System.out.println(result);
        RPC.stopProxy(proxy);
    }
}
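What "the proxy call travels over a socket" means can be illustrated with a plain TCP round trip. This is a hypothetical, stripped-down wire protocol: the client writes the method's argument as a line of text and reads the reply back, where real Hadoop RPC frames requests with proper serialization. The class and method names are invented for the sketch.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketRpcSketch {

    // Serve exactly one connection: read a username, reply with the login message.
    static Thread serveOnce(ServerSocket server) {
        return new Thread(() -> {
            try (Socket c = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                 PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                out.println(in.readLine() + " logged in successfully!");
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
    }

    // One request/response exchange over TCP: "serialize" the argument as a
    // text line, read back the result. This is the hop a proxy object hides.
    public static String roundTrip(String username) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            Thread t = serveOnce(server);
            t.start();
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                out.println(username);
                String reply = in.readLine();
                t.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("mijie"));
    }
}
```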
Overview of Hadoop's communication protocols (reposted)
Abbreviations used below:
DN: DataNode
TT: TaskTracker
NN: NameNode
SNN: Secondary NameNode
JT: JobTracker
HDFS has 5 protocols:
DatanodeProtocol (DN && NN)
InterDatanodeProtocol (DN && DN)
ClientDatanodeProtocol (Client && DN)
ClientProtocol (Client && NN)
NamenodeProtocol (SNN && NN)
Map/Reduce has 3 protocols:
InterTrackerProtocol (TT && JT)
JobSubmissionProtocol (Client && JT)
TaskUmbilicalProtocol (Child && TT)
Of these, 5 protocols see heavy traffic: DatanodeProtocol, ClientDatanodeProtocol, InterTrackerProtocol, TaskUmbilicalProtocol, and JobSubmissionProtocol.
All 8 protocols exist in Hadoop as Java interfaces, and every one of them extends the VersionedProtocol interface.
The protocol implementations are concentrated in 4 classes: JobTracker, NameNode, TaskTracker, and DataNode.
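The shared VersionedProtocol contract can be sketched in plain Java. The method shape mirrors Hadoop's org.apache.hadoop.ipc.VersionedProtocol (the RPC layer asks a server which version of a protocol it implements and rejects mismatches); the LoginService types and the versionMatches helper below are hypothetical stand-ins, not Hadoop code.

```java
public class VersionedProtocolSketch {

    // Sketch of the common super-interface all 8 protocols extend.
    public interface VersionedProtocol {
        long getProtocolVersion(String protocol, long clientVersion);
    }

    // A concrete protocol, in the style of the tutorial's LoginServiceInterface.
    public interface LoginServiceInterface extends VersionedProtocol {
        long versionID = 1L;
        String login(String username, String password);
    }

    static class LoginServiceImpl implements LoginServiceInterface {
        @Override
        public long getProtocolVersion(String protocol, long clientVersion) {
            return LoginServiceInterface.versionID;
        }

        @Override
        public String login(String username, String password) {
            return username + " logged in successfully!";
        }
    }

    // Client-side version check, as the RPC layer would perform it before
    // dispatching any calls on the protocol.
    public static boolean versionMatches(VersionedProtocol server, long clientVersion) {
        return server.getProtocolVersion("LoginServiceInterface", clientVersion) == clientVersion;
    }

    public static void main(String[] args) {
        System.out.println(versionMatches(new LoginServiceImpl(), 1L));
    }
}
```

This is why the tutorial's LoginServiceInterface declares versionID = 1L and the client passes 1L to RPC.getProxy: the two numbers are compared during connection setup.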
ClientProtocol (Client && NN)
Protocol overview:
Client --> NN: the interface clients use for file operations, file-system operations, system administration, and troubleshooting.