Hadoop Source Code Analysis
Chapter 0: How RPC Communication Works
0) Review
1) Goal: simulate how the three parts of RPC work together: the client, the server, and the communication protocol.
2) Code:
(1) In the existing HDFSClient project, create the package com.atguigu.rpc
(2) Create the RPC protocol
package com.atguigu.rpc;

public interface RPCProtocol {

    // Protocol version number; client and server must agree on it
    long versionID = 666;

    void mkdirs(String path);
}
(3) Create the RPC server
package com.atguigu.rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

import java.io.IOException;

// Implement the communication protocol
public class NNServer implements RPCProtocol {

    @Override
    public void mkdirs(String path) {
        System.out.println("Server: creating path " + path);
    }

    public static void main(String[] args) throws IOException {
        // Build and start the RPC server
        Server server = new RPC.Builder(new Configuration())
                .setBindAddress("localhost")
                .setPort(8888)
                .setProtocol(RPCProtocol.class)
                .setInstance(new NNServer())
                .build();

        System.out.println("Server is up and running");
        server.start();
    }
}
(4) Create the RPC client
package com.atguigu.rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

import java.io.IOException;
import java.net.InetSocketAddress;

public class HDFSClient {

    public static void main(String[] args) throws IOException {
        // Obtain a proxy implementing RPCProtocol; calls on it are
        // forwarded to the server listening on localhost:8888
        RPCProtocol client = RPC.getProxy(
                RPCProtocol.class,
                RPCProtocol.versionID,
                new InetSocketAddress("localhost", 8888),
                new Configuration());

        System.out.println("I am the client");

        client.mkdirs("/input");

        // Release the proxy and its underlying connection
        RPC.stopProxy(client);
    }
}
3) Test
(1) Start the server. Its console prints: Server is up and running. Run jps in the Terminal window; the NNServer process should be listed.
(2) Start the client. The client console prints: I am the client. The server console prints: Server: creating path /input
4) Summary
The RPC client calls a method declared in the communication protocol, but the method body executes on the server.
The communication protocol is simply an interface specification.
At the bottom, RPC traffic is carried over sockets; Hadoop's own IPC layer is built on Java NIO sockets rather than Netty.
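The "client calls an interface method, the body runs elsewhere" idea can be sketched with a plain JDK dynamic proxy, which is also how RPC client stubs are typically built. This is an illustration only, with no Hadoop and no network; the Protocol and ServerSide names here are hypothetical, and a real RPC framework would serialize the call inside the handler and send it over a socket:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // The "protocol" is just an interface spec, like RPCProtocol above.
    interface Protocol {
        String mkdirs(String path);
    }

    // Hypothetical server-side implementation of the protocol.
    static class ServerSide implements Protocol {
        @Override
        public String mkdirs(String path) {
            return "server created " + path;
        }
    }

    // Build a client-side proxy and invoke the protocol method through it.
    static String call() {
        Protocol server = new ServerSide();

        // In a real RPC framework, this handler would serialize the method
        // name and arguments and ship them over a socket to the server.
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(server, args);

        Protocol client = (Protocol) Proxy.newProxyInstance(
                Protocol.class.getClassLoader(),
                new Class<?>[]{Protocol.class},
                handler);

        // Looks like a local call; the body runs in the "server" object.
        return client.mkdirs("/input");
    }

    public static void main(String[] args) {
        System.out.println(call());
    }
}
```

Running it prints "server created /input": the client never touches ServerSide directly, exactly as the HDFSClient above only ever holds a proxy for RPCProtocol.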
Chapter 1: NameNode Startup Source Code Analysis
Add the following dependencies to pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs-client</artifactId>
        <version>3.1.3</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
Press Shift twice in IDEA, search for NameNode, and open the class from the hadoop-hdfs-3.1.3 dependency.
Official NameNode javadoc:
NameNode serves as both directory namespace manager and “inode table”
for the Hadoop DFS. There is a single NameNode running in any DFS
deployment. (Well, except when there is a second backup/failover
NameNode, or when using federated NameNodes.) The NameNode controls
two critical tables: 1) filename->blocksequence (namespace) 2)
block->machinelist (“inodes”) The first table is stored on disk and is
very precious. The second table is rebuilt every time the NameNode
comes up. ‘NameNode’ refers to both this class as well as the
‘NameNode server’. The ‘FSNamesystem’ class actually performs most of
the filesystem management. The majority of the ‘NameNode’ class itself
is concerned with exposing the IPC interface and the HTTP server to
the outside world, plus some configuration management. NameNode
implements the ClientProtocol interface, which allows clients to ask
for DFS services. ClientProtocol is not designed for direct use by
authors of DFS client code. End-users should instead use the
FileSystem class. NameNode also implements the DatanodeProtocol
interface, used by DataNodes that actually store DFS data blocks.
These methods are invoked repeatedly and automatically by all the
DataNodes in a DFS deployment. NameNode also implements the
NamenodeProtocol interface, used by secondary namenodes or rebalancing
processes to get partial NameNode state, for example partial blocksMap
etc.
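The two critical tables the javadoc describes can be pictured as two maps. This is a toy model only, not Hadoop's actual data structures (real namespace entries are INode trees and block locations are DatanodeDescriptor lists); the file name, block IDs, and host names below are made up:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NameNodeTables {

    // Table 1: filename -> block sequence (the namespace; persisted on
    // disk in fsimage + edit logs, hence "very precious")
    static final Map<String, List<Long>> namespace = new HashMap<>();

    // Table 2: block -> machine list (rebuilt from DataNode block reports
    // every time the NameNode comes up; never persisted)
    static final Map<Long, List<String>> blockToMachines = new HashMap<>();

    static {
        namespace.put("/input/words.txt", List.of(1001L, 1002L));
        blockToMachines.put(1001L, List.of("hadoop102", "hadoop103"));
        blockToMachines.put(1002L, List.of("hadoop103", "hadoop104"));
    }

    public static void main(String[] args) {
        // To serve a read, the NameNode answers two questions: which
        // blocks make up the file, and which machines hold each block?
        for (long blk : namespace.get("/input/words.txt")) {
            System.out.println("block " + blk + " -> " + blockToMachines.get(blk));
        }
    }
}
```

The split explains the startup behavior analyzed below: table 1 is loaded from the image file on disk, while table 2 starts empty and fills in as DataNodes report their blocks.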
The main method contains a createNameNode call; step into it.
Inside, createNameNode constructs a new NameNode object.
Step into the initialize method.
It first starts the HTTP service.
Keep stepping down to find the port the service binds: 9870. Only once this port is open can the web UI at hadoop102:9870 be reached.
The HTTP service then starts.
The HttpServer2 object binds the servlets and components that back the web pages.
Next, step into the method that loads the image file.
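Putting the steps above together, the startup path traced so far looks roughly like this (an abridged outline of the call chain as seen in the walkthrough, not verbatim source):

```
NameNode.main(argv)
  └─ createNameNode(argv, conf)
       └─ new NameNode(conf)
            └─ initialize(conf)
                 ├─ startHttpServer(conf)   // HttpServer2, web UI on port 9870
                 └─ loadNamesystem(conf)    // load the image file (fsimage) from disk
```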