Implementing Simple Load Balancing with Zookeeper
The complete code is here: 基于Zookeeper实现简易的负载均衡
The walkthrough follows below.
I. Requirements
1. Programming Exercise 1:
Building on the Netty-based custom RPC example, implement a simple service registration and discovery mechanism based on Zookeeper.
The reworked version must satisfy the following:
- Start two servers; each automatically registers its IP and port with Zookeeper.
- When the client starts, it fetches all service-provider nodes from Zookeeper and establishes a connection to every server.
- When a server goes offline, Zookeeper automatically removes that server's node from the registry and the client disconnects from it.
- When a server comes back online, the client detects it and re-establishes the connection.
2. Programming Exercise 2:
Building on Exercise 1, implement a simple Zookeeper-based load-balancing strategy.
The reworked version must satisfy the following:
Zookeeper records each server's most recent response time, valid for 5 seconds; if the server receives no new request within those 5 seconds, the response time is cleared (reset).
Whenever the client makes a call, it picks the server with the shortest last response time; if the times are equal, it picks one of those servers at random, which achieves the load balancing.
II. Requirements Analysis and Implementation Approach
1. Simple service registration and discovery with Zookeeper
1.1 Analysis
- Zookeeper ephemeral nodes: once the session of the client that created an ephemeral node expires, Zookeeper deletes that node automatically.
- Zookeeper watchers: the client can listen for events on the nodes it cares about, so it is notified whenever an event occurs on a watched node.
1.2 Approach
- The RPC server registers itself using ip:port as the Zookeeper node name, so the full node path is /rpc/server/ip:port.
- The RPC client listens for child-node add and remove events under /rpc/server/.
- Apache Curator is used as the Zookeeper client framework; it is much more convenient than the native Zookeeper client. (The sketch below shows what the client sees under this layout.)
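For reference, a minimal sketch of what the registry looks like from the client side once two servers are online, assuming the namespace rpc and parent path /server configured further down (error handling omitted):

// Minimal sketch: list the registered servers. Because the Curator client defined
// below is built with .namespace("rpc"), paths here are relative to /rpc.
List<String> servers = curatorFramework.getChildren().forPath("/server");
// Expected with two servers online: [127.0.0.1:8898, 127.0.0.1:8899]
for (String server : servers) {
    byte[] data = curatorFramework.getData().forPath("/server/" + server);
    // Node data format: "<lastRequestTimeMillis>-<spendTimeMillis>", e.g. "1641710878854-126"
    System.out.println(server + " -> " + new String(data, StandardCharsets.UTF_8));
}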
2. Simple Zookeeper-based load-balancing strategy
2.1 Analysis
Each service node stores the information of its most recent request in the format lastRequestTime-spendTime.
The initial value is currentTime-0. The RPC client updates the value of the corresponding service node on every call; if a node has not been called for a long time, its value is reset to the initial form.
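A small sketch of how that node data can be encoded and decoded (the helper names are illustrative, not part of the project's code):

// Illustrative helpers for the "<lastRequestTimeMillis>-<spendTimeMillis>" node data.
static String encodeCallInfo(long lastEndTime, long spendTime) {
    return lastEndTime + "-" + spendTime;        // e.g. "1641710878854-126"
}

static long[] decodeCallInfo(String data) {
    String[] parts = data.split("-");            // [lastEndTime, spendTime]
    return new long[]{Long.parseLong(parts[0]), Long.parseLong(parts[1])};
}

// A freshly registered server starts out as "<now>-0"
String initial = encodeCallInfo(System.currentTimeMillis(), 0);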
2.2 Approach
When the RPC server creates its service node, it writes the initial call information. A scheduled thread pool then checks every 5 seconds whether the last request is more than 5 seconds old; if so, the call information is reset to its initial state.
III. Implementation
1. Declaring the Zookeeper client
Add the following configuration to both the server and the client:
@Configuration
public class ZookeeperConfig {

    public static final String SERVER_PATH = "/server";

    @Value("${zookeeper.server}")
    private String server;

    @Bean
    public CuratorFramework curatorFramework() {
        // Exponential backoff: initial sleep 1s, at most 3 retries
        RetryPolicy exponentialBackoffRetry = new ExponentialBackoffRetry(1000, 3);
        CuratorFramework curatorFramework = CuratorFrameworkFactory.builder()
                .connectString(server)
                .sessionTimeoutMs(1000)
                .connectionTimeoutMs(3000)
                .retryPolicy(exponentialBackoffRetry)
                // All paths are created under the /rpc namespace
                .namespace("rpc")
                .build();
        curatorFramework.start();
        return curatorFramework;
    }
}
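Note that sessionTimeoutMs is only 1 second here, so a downed server's ephemeral node disappears quickly. If you also want the application to fail fast when Zookeeper itself is unreachable, one optional addition (my assumption, not part of the original project) is to wait for the connection before returning the bean:

// Optional hardening (assumption): block until the Curator client is connected,
// instead of returning a bean that is still connecting in the background.
curatorFramework.start();
try {
    if (!curatorFramework.blockUntilConnected(5, TimeUnit.SECONDS)) {
        throw new IllegalStateException("Could not connect to Zookeeper at " + server);
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new IllegalStateException("Interrupted while connecting to Zookeeper", e);
}
return curatorFramework;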
2. Service registration
After the server starts, it registers itself along with its initial call information: RpcServer.startServer calls RpcServer.createServerNodeOnZookeeper.
public void startServer(String ip, int port) {
    try {
        this.port = port;
        this.bossGroup = new NioEventLoopGroup(1);
        this.workerGroup = new NioEventLoopGroup();
        ServerBootstrap serverBootstrap = new ServerBootstrap();
        serverBootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) throws Exception {
                        ChannelPipeline pipeline = ch.pipeline();
                        pipeline.addLast(new StringDecoder());
                        pipeline.addLast(new StringEncoder());
                        // Business handler
                        pipeline.addLast(rpcServerHandler);
                    }
                });
        ChannelFuture sync = serverBootstrap.bind(ip, port).sync();
        System.out.println("=============服务端启动成功============");
        // Register this server on Zookeeper once the port is bound
        createServerNodeOnZookeeper(ip, port);
        sync.channel().closeFuture().sync();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        if (this.bossGroup != null) {
            this.bossGroup.shutdownGracefully();
        }
        if (this.workerGroup != null) {
            this.workerGroup.shutdownGracefully();
        }
    }
}

private void createServerNodeOnZookeeper(String ip, int port) {
    // Create an ephemeral node for this server
    String path = null;
    try {
        path = ZookeeperConfig.SERVER_PATH + "/" + ip + ":" + port;
        // Initial call info: "<currentTime>-0"
        String data = System.currentTimeMillis() + "-0";
        zkCurator.create()
                .creatingParentContainersIfNeeded()
                .withMode(CreateMode.EPHEMERAL)
                .forPath(path, data.getBytes(StandardCharsets.UTF_8));
    } catch (Exception e) {
        log.error("创建服务节点异常, path={}", path, e);
    }
}
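One practical detail worth knowing (my addition, not part of the original code): if a server is killed and restarted before its old Zookeeper session expires, the stale ephemeral node may still exist and the create call will fail with NodeExistsException. A sketch that clears the leftover node first:

// Sketch (assumption): remove a leftover ephemeral node from a previous session
// before registering, so a quick restart does not fail with NodeExistsException.
if (zkCurator.checkExists().forPath(path) != null) {
    zkCurator.delete().forPath(path);
}
zkCurator.create()
        .creatingParentContainersIfNeeded()
        .withMode(CreateMode.EPHEMERAL)
        .forPath(path, data.getBytes(StandardCharsets.UTF_8));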
3. Client-side service discovery
That is, the client watches the service nodes and keeps their request information up to date.
@Slf4j
@SpringBootApplication
public class ClientBootStrapApplication implements CommandLineRunner {

    private final ScheduledExecutorService scheduledExecutor = Executors.newScheduledThreadPool(3);

    @Autowired
    private CuratorFramework zkCurator;
    @Autowired
    private RpcClientSelector rpcClientSelector;

    @Override
    public void run(String... args) throws Exception {
        TreeCache treeCache = new TreeCache(zkCurator, ZookeeperConfig.SERVER_PATH);
        // Add a listener that watches for server changes
        treeCache.getListenable().addListener(new ServerListener(rpcClientSelector));
        treeCache.start();
        // Scheduled task: every 5 seconds, reset the request info of any server
        // node that has not received a request for more than 5 seconds
        ScheduledTask scheduledTask = new ScheduledTask(zkCurator, rpcClientSelector);
        scheduledExecutor.scheduleWithFixedDelay(scheduledTask, 5, 5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) {
        SpringApplication.run(ClientBootStrapApplication.class, args);
    }
}
ServerListener watches the service nodes:
@Slf4j
public class ServerListener implements TreeCacheListener {

    private final RpcClientSelector rpcClientSelector;

    public ServerListener(RpcClientSelector rpcClientSelector) {
        this.rpcClientSelector = rpcClientSelector;
    }

    @Override
    public void childEvent(CuratorFramework client, TreeCacheEvent event) throws Exception {
        ChildData childData = event.getData();
        if (childData == null) {
            return;
        }
        String childDataPath = childData.getPath();
        // The node name is "ip:port"
        String serverPath = childDataPath.substring(childDataPath.lastIndexOf("/") + 1);
        TreeCacheEvent.Type eventType = event.getType();
        String[] serverArr = serverPath.split(":");
        if (serverArr.length != 2) {
            // Ignore the parent node and anything that is not "ip:port"
            return;
        }
        if (TreeCacheEvent.Type.NODE_ADDED.equals(eventType)) {
            log.info("有新的服务加入{}", serverPath);
            RpcClient rpcClient = new RpcClient(serverArr[0], Integer.parseInt(serverArr[1]));
            rpcClientSelector.putIfAbsent(serverPath, rpcClient);
            log.info("现有客户端实例{}", rpcClientSelector.getServers());
        } else if (TreeCacheEvent.Type.NODE_REMOVED.equals(eventType)) {
            log.info("有服务被移除{}", serverPath);
            rpcClientSelector.remove(serverPath);
            log.info("现有客户端实例{}", rpcClientSelector.getServers());
        }
    }
}
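TreeCache works fine here, but it is deprecated in Curator 5.x in favor of CuratorCache. On a newer Curator version an equivalent listener could look roughly like this (a sketch only; handleAdded and handleRemoved stand in for the NODE_ADDED and NODE_REMOVED branches above):

// Rough CuratorCache equivalent (Curator 5.x), shown only as an alternative sketch.
CuratorCache cache = CuratorCache.build(zkCurator, ZookeeperConfig.SERVER_PATH);
cache.listenable().addListener(CuratorCacheListener.builder()
        .forCreates(node -> handleAdded(node.getPath()))     // node added -> create the RpcClient
        .forDeletes(node -> handleRemoved(node.getPath()))   // node removed -> close the RpcClient
        .build());
cache.start();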
ScheduledTask resets the request information of any service node that has gone more than 5 seconds without a request:
@Slf4j
public class ScheduledTask implements Runnable {

    private final CuratorFramework zkCurator;
    private final RpcClientSelector rpcClientSelector;

    public ScheduledTask(CuratorFramework zkCurator, RpcClientSelector rpcClientSelector) {
        this.zkCurator = zkCurator;
        this.rpcClientSelector = rpcClientSelector;
    }

    @Override
    public void run() {
        try {
            List<String> servers = zkCurator.getChildren().forPath(ZookeeperConfig.SERVER_PATH);
            if (CollectionUtils.isEmpty(servers)) {
                return;
            }
            for (String server : servers) {
                String serverPath = ZookeeperConfig.SERVER_PATH + "/" + server;
                byte[] bytes = zkCurator.getData().forPath(serverPath);
                String data = new String(bytes);
                String[] split = data.split("-");
                long lastEndTime = Long.parseLong(split[0]);
                // More than 5 seconds since the last request: reset the spend time to 0
                if (System.currentTimeMillis() - 5000 > lastEndTime) {
                    String setData = lastEndTime + "-0";
                    zkCurator.setData().forPath(serverPath, setData.getBytes(StandardCharsets.UTF_8));
                    rpcClientSelector.resetServerLastTime(server, lastEndTime, 0);
                    log.info("服务{}长时间未请求,最后一次请求耗时置为{}", server, setData);
                }
            }
        } catch (Exception e) {
            log.error("更新服务信息异常", e);
        }
    }
}
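Both this task and RpcClientProxy#resetTime (below) write to the same znode, so a reset can in principle overwrite a request that completed in between. If that matters, one option (my assumption, not in the original code) is an optimistic check-and-set using the node version:

// Sketch (assumption): version-checked update so the reset does not clobber a
// concurrent update written by RpcClientProxy#resetTime.
Stat stat = new Stat();
byte[] bytes = zkCurator.getData().storingStatIn(stat).forPath(serverPath);
long lastEndTime = Long.parseLong(new String(bytes, StandardCharsets.UTF_8).split("-")[0]);
if (System.currentTimeMillis() - 5000 > lastEndTime) {
    try {
        zkCurator.setData()
                .withVersion(stat.getVersion())   // fails if the node changed since the read
                .forPath(serverPath, (lastEndTime + "-0").getBytes(StandardCharsets.UTF_8));
    } catch (KeeperException.BadVersionException e) {
        // A request just updated this node; skip the reset this round
    }
}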
The client then issues requests and updates the request information.
In com.mh.rpc.consumer.proxy.RpcClientProxy#createProxy, the choice of server is delegated to com.mh.rpc.consumer.loadbalance.RpcClientSelector, which implements the load balancing. com.mh.rpc.consumer.proxy.RpcClientProxy#resetTime pushes the request completion time and response duration to the corresponding service node in Zookeeper.
@Slf4j
@Component
public class RpcClientProxy {

    private int callTimes = 0;

    @Autowired
    private CuratorFramework zkCurator;
    @Autowired
    private RpcClientSelector rpcClientSelector;

    public Object createProxy(Class<?> serviceClass) {
        return Proxy.newProxyInstance(Thread.currentThread().getContextClassLoader(),
                new Class[]{serviceClass},
                (proxy, method, args) -> {
                    RpcRequest rpcRequest = new RpcRequest();
                    rpcRequest.setRequestId(UUID.randomUUID().toString());
                    rpcRequest.setClassName(method.getDeclaringClass().getName());
                    rpcRequest.setMethodName(method.getName());
                    rpcRequest.setParameters(args);
                    rpcRequest.setParameterTypes(method.getParameterTypes());
                    RpcClient rpcClient = null;
                    // Number of calls made so far
                    callTimes += 1;
                    try {
                        long startTime = System.currentTimeMillis();
                        // Pick the server with the shortest last response time
                        rpcClient = rpcClientSelector.determineClient();
                        log.info("第{}次访问,请求的服务是{}", callTimes, rpcClient);
                        Object responseMsg = rpcClient.send(JSON.toJSONString(rpcRequest));
                        // Random sleep, just to make the load balancing easier to observe
                        Thread.sleep((long) (Math.random() * 100));
                        RpcResponse rpcResponse = JSON.parseObject(responseMsg.toString(), RpcResponse.class);
                        if (rpcResponse.getError() != null) {
                            throw new RuntimeException(rpcResponse.getError());
                        }
                        Object result = rpcResponse.getResult();
                        long endTime = System.currentTimeMillis();
                        long spendTime = endTime - startTime;
                        // Push the updated request info for the server that was called
                        resetTime(rpcClient.getServerInfo().getServerStr(), endTime, spendTime);
                        return JSON.parseObject(result.toString(), method.getReturnType());
                    } catch (Exception e) {
                        log.error("第{}次访问,请求的服务是{},异常", callTimes, rpcClient, e);
                    }
                    return null;
                }
        );
    }

    private void resetTime(String serverPath, long endTime, long spendTime) throws Exception {
        String data = endTime + "-" + spendTime;
        String path = ZookeeperConfig.SERVER_PATH + "/" + serverPath;
        rpcClientSelector.resetServerLastTime(serverPath, endTime, spendTime);
        zkCurator.setData().forPath(path, data.getBytes(StandardCharsets.UTF_8));
        log.info("第{}次访问,将服务{}请求信息置为{}", callTimes, path, data);
    }
}
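For context, the proxy would typically be consumed like this; the UserService interface and getById method are assumed names based on the /user/get/1 URL used in the verification section, not code from the project:

// Hypothetical usage of the proxy (interface and method names are assumptions).
@Autowired
private RpcClientProxy rpcClientProxy;

public Object getUser(Long id) {
    UserService userService = (UserService) rpcClientProxy.createProxy(UserService.class);
    return userService.getById(id);   // serialized into an RpcRequest and load-balanced per call
}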
RPC client selector
com.mh.rpc.consumer.loadbalance.RpcClientSelector implements the load balancing and holds all RPC clients; RpcClientSelector.determineClient decides which client to use.
private final Map<String, RpcClient> rpcClientMap = new HashMap<>();
private final List<String> servers = new ArrayList<>();

public void putIfAbsent(String server, RpcClient rpcClient) {
    if (!rpcClientMap.containsKey(server)) {
        rpcClientMap.putIfAbsent(server, rpcClient);
        servers.add(server);
    }
}

public void remove(String server) {
    if (rpcClientMap.containsKey(server)) {
        // Close the connection to the removed server
        rpcClientMap.remove(server).close();
        servers.remove(server);
    }
}

public Set<String> getServers() {
    return rpcClientMap.keySet();
}

public void resetServerLastTime(String server, long lastEndTime, long lastSpendTime) {
    RpcClient rpcClient = rpcClientMap.get(server);
    if (rpcClient == null) {
        // The listener may not have created this client yet, or it was just removed
        return;
    }
    ServerInfo serverInfo = rpcClient.getServerInfo();
    serverInfo.setLastEndTime(lastEndTime);
    serverInfo.setLastSpendTime(lastSpendTime);
}

public RpcClient determineClient() {
    // Pick the client whose last call had the shortest spend time; when several
    // clients are tied (e.g. all reset to 0), choose one of them at random, as required
    List<RpcClient> candidates = new ArrayList<>();
    long bestTime = Long.MAX_VALUE;
    for (RpcClient rpcClient : rpcClientMap.values()) {
        long curTime = rpcClient.getServerInfo().getLastSpendTime();
        if (curTime < bestTime) {
            bestTime = curTime;
            candidates.clear();
            candidates.add(rpcClient);
        } else if (curTime == bestTime) {
            candidates.add(rpcClient);
        }
    }
    if (candidates.isEmpty()) {
        return null;
    }
    return candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
}
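One caveat (my observation, not from the original article): rpcClientMap is a plain HashMap that is modified on the TreeCache event thread and read on the request-handling threads. For anything beyond a demo, concurrent collections would be safer, for example:

// Sketch (assumption): make the client registry safe for concurrent add/remove/read.
private final Map<String, RpcClient> rpcClientMap = new ConcurrentHashMap<>();
private final List<String> servers = new CopyOnWriteArrayList<>();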
IV. Verification
The result is verified by reading the logs.
Every request hits the same URL: http://localhost:8080/user/get/1
2022-01-09 14:47:54.654 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 有新的服务加入127.0.0.1:8898
2022-01-09 14:47:56.688 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 现有客户端实例[127.0.0.1:8898]
2022-01-09 14:47:56.688 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 有新的服务加入127.0.0.1:8899
2022-01-09 14:47:56.697 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 现有客户端实例[127.0.0.1:8898, 127.0.0.1:8899]
2022-01-09 14:47:58.728 INFO 9380 --- [nio-8080-exec-1] c.mh.rpc.consumer.proxy.RpcClientProxy : 第1次访问,请求的服务是127.0.0.1:8898
2022-01-09 14:47:59.275 INFO 9380 --- [nio-8080-exec-1] c.mh.rpc.consumer.proxy.RpcClientProxy : 第1次访问,将服务/server/127.0.0.1:8898请求信息置为1641710878854-126
2022-01-09 14:47:59.702 INFO 9380 --- [pool-3-thread-1] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710720466-0
2022-01-09 14:48:00.268 INFO 9380 --- [nio-8080-exec-3] c.mh.rpc.consumer.proxy.RpcClientProxy : 第2次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:00.417 INFO 9380 --- [nio-8080-exec-3] c.mh.rpc.consumer.proxy.RpcClientProxy : 第2次访问,将服务/server/127.0.0.1:8899请求信息置为1641710880381-113
2022-01-09 14:48:01.399 INFO 9380 --- [nio-8080-exec-2] c.mh.rpc.consumer.proxy.RpcClientProxy : 第3次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:01.461 INFO 9380 --- [nio-8080-exec-2] c.mh.rpc.consumer.proxy.RpcClientProxy : 第3次访问,将服务/server/127.0.0.1:8899请求信息置为1641710881424-25
2022-01-09 14:48:02.674 INFO 9380 --- [nio-8080-exec-4] c.mh.rpc.consumer.proxy.RpcClientProxy : 第4次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:02.862 INFO 9380 --- [nio-8080-exec-4] c.mh.rpc.consumer.proxy.RpcClientProxy : 第4次访问,将服务/server/127.0.0.1:8899请求信息置为1641710882740-66
2022-01-09 14:48:04.837 INFO 9380 --- [pool-3-thread-1] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8898长时间未请求,最后一次请求耗时置为1641710878854-0
2022-01-09 14:48:06.197 INFO 9380 --- [nio-8080-exec-5] c.mh.rpc.consumer.proxy.RpcClientProxy : 第5次访问,请求的服务是127.0.0.1:8898
2022-01-09 14:48:06.349 INFO 9380 --- [nio-8080-exec-5] c.mh.rpc.consumer.proxy.RpcClientProxy : 第5次访问,将服务/server/127.0.0.1:8898请求信息置为1641710886216-19
2022-01-09 14:48:10.189 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710882740-0
2022-01-09 14:48:12.082 INFO 9380 --- [nio-8080-exec-6] c.mh.rpc.consumer.proxy.RpcClientProxy : 第6次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:12.289 INFO 9380 --- [nio-8080-exec-6] c.mh.rpc.consumer.proxy.RpcClientProxy : 第6次访问,将服务/server/127.0.0.1:8899请求信息置为1641710892178-96
2022-01-09 14:48:15.324 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8898长时间未请求,最后一次请求耗时置为1641710886216-0
2022-01-09 14:48:20.391 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8898长时间未请求,最后一次请求耗时置为1641710886216-0
2022-01-09 14:48:20.431 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710892178-0
2022-01-09 14:48:25.557 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8898长时间未请求,最后一次请求耗时置为1641710886216-0
2022-01-09 14:48:25.597 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710892178-0
2022-01-09 14:48:30.136 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 有服务被移除127.0.0.1:8898
2022-01-09 14:48:30.138 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 现有客户端实例[127.0.0.1:8899]
2022-01-09 14:48:30.654 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710892178-0
2022-01-09 14:48:34.865 INFO 9380 --- [nio-8080-exec-8] c.mh.rpc.consumer.proxy.RpcClientProxy : 第7次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:35.019 INFO 9380 --- [nio-8080-exec-8] c.mh.rpc.consumer.proxy.RpcClientProxy : 第7次访问,将服务/server/127.0.0.1:8899请求信息置为1641710914904-39
2022-01-09 14:48:36.514 INFO 9380 --- [nio-8080-exec-7] c.mh.rpc.consumer.proxy.RpcClientProxy : 第8次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:36.605 INFO 9380 --- [nio-8080-exec-7] c.mh.rpc.consumer.proxy.RpcClientProxy : 第8次访问,将服务/server/127.0.0.1:8899请求信息置为1641710916571-57
2022-01-09 14:48:38.520 INFO 9380 --- [nio-8080-exec-9] c.mh.rpc.consumer.proxy.RpcClientProxy : 第9次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:38.725 INFO 9380 --- [nio-8080-exec-9] c.mh.rpc.consumer.proxy.RpcClientProxy : 第9次访问,将服务/server/127.0.0.1:8899请求信息置为1641710918610-90
2022-01-09 14:48:46.101 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710918610-0
2022-01-09 14:48:48.898 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 有新的服务加入127.0.0.1:8898
2022-01-09 14:48:48.906 INFO 9380 --- [tor-TreeCache-0] c.mh.rpc.consumer.config.ServerListener : 现有客户端实例[127.0.0.1:8898, 127.0.0.1:8899]
2022-01-09 14:48:51.433 INFO 9380 --- [pool-3-thread-2] com.mh.rpc.consumer.task.ScheduledTask : 服务127.0.0.1:8899长时间未请求,最后一次请求耗时置为1641710918610-0
2022-01-09 14:48:52.609 INFO 9380 --- [io-8080-exec-10] c.mh.rpc.consumer.proxy.RpcClientProxy : 第10次访问,请求的服务是127.0.0.1:8898
2022-01-09 14:48:52.868 INFO 9380 --- [io-8080-exec-10] c.mh.rpc.consumer.proxy.RpcClientProxy : 第10次访问,将服务/server/127.0.0.1:8898请求信息置为1641710932766-157
2022-01-09 14:48:54.016 INFO 9380 --- [nio-8080-exec-1] c.mh.rpc.consumer.proxy.RpcClientProxy : 第11次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:54.169 INFO 9380 --- [nio-8080-exec-1] c.mh.rpc.consumer.proxy.RpcClientProxy : 第11次访问,将服务/server/127.0.0.1:8899请求信息置为1641710934060-44
2022-01-09 14:48:55.621 INFO 9380 --- [nio-8080-exec-3] c.mh.rpc.consumer.proxy.RpcClientProxy : 第12次访问,请求的服务是127.0.0.1:8899
2022-01-09 14:48:56.012 INFO 9380 --- [nio-8080-exec-3] c.mh.rpc.consumer.proxy.RpcClientProxy : 第12次访问,将服务/server/127.0.0.1:8899请求信息置为1641710935693-72