Handwritten RPC

1. RPC Flow

1.1 RPC Design Flow

(figure)

1.2 Underlying Invocation Flow

(figure)

2. RPC Project Implementation

2.1 Project Design

(figure)
Module overview:
RPC-CLIENT (client project): service subscription, invocation of the order-service interface, and load balancing.
RPC-COMMON (shared component module): common components such as the RPC annotations and serialization utilities.
RPC-INTERFACE-API (RPC interface module): defines the RPC interfaces; an order-service interface is provided here for testing.
RPC-SERVER (RPC server): service registration, encoding/decoding, and the implementation of the order-service interface.

2.2 Project Structure

  1. Parent project
    Responsibilities:
    Root-level project; manages the dependency configuration of all modules in one place.
    POM dependencies:
<dependencyManagement>
  <dependencies>
    <!-- ZooKeeper client -->
    <dependency>
      <groupId>com.github.sgroschupf</groupId>
      <artifactId>zkclient</artifactId>
      <version>0.1</version>
    </dependency>
    <!-- Netty -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-all</artifactId>
      <version>4.1.42.Final</version>
    </dependency>
    <!-- Objenesis instantiation library -->
    <dependency>
      <groupId>org.objenesis</groupId>
      <artifactId>objenesis</artifactId>
      <version>2.6</version>
    </dependency>
    <!-- protostuff core -->
    <dependency>
      <groupId>com.dyuproject.protostuff</groupId>
      <artifactId>protostuff-core</artifactId>
      <version>1.0.8</version>
    </dependency>
    <!-- protostuff runtime -->
    <dependency>
      <groupId>com.dyuproject.protostuff</groupId>
      <artifactId>protostuff-runtime</artifactId>
      <version>1.0.8</version>
    </dependency>
    <!-- Spring context -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
    </dependency>
    <!-- Lombok -->
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>1.16.22</version>
    </dependency>
    <!-- SLF4J + Log4j logging binding -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.25</version>
    </dependency>
    <!-- Google Guava core libraries -->
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>19.0</version>
    </dependency>
    <!-- Apache Commons Collections -->
    <dependency>
      <groupId>commons-collections</groupId>
      <artifactId>commons-collections</artifactId>
      <version>3.2.2</version>
    </dependency>
    <!-- Apache Commons Lang -->
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.6</version>
    </dependency>
    <!-- Apache Commons BeanUtils -->
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.9.3</version>
    </dependency>
    <!-- CGLIB dynamic proxy -->
    <dependency>
      <groupId>cglib</groupId>
      <artifactId>cglib</artifactId>
      <version>3.1</version>
    </dependency>
    <!-- Reflections metadata scanning -->
    <dependency>
      <groupId>org.reflections</groupId>
      <artifactId>reflections</artifactId>
      <version>0.9.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>

  2. Common component project
    (figure)
    Responsibilities:
    Encapsulates the shared components, such as the common RPC annotations and the serialization utilities, so that every module can reuse them.
    POM dependencies:
<dependencies>
  <!-- Spring context -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>
  <!-- Objenesis instantiation library -->
  <dependency>
    <groupId>org.objenesis</groupId>
    <artifactId>objenesis</artifactId>
  </dependency>
  <!-- protostuff core -->
  <dependency>
    <groupId>com.dyuproject.protostuff</groupId>
    <artifactId>protostuff-core</artifactId>
  </dependency>
  <!-- protostuff runtime -->
  <dependency>
    <groupId>com.dyuproject.protostuff</groupId>
    <artifactId>protostuff-runtime</artifactId>
  </dependency>
  <!-- Apache Commons Collections -->
  <dependency>
    <groupId>commons-collections</groupId>
    <artifactId>commons-collections</artifactId>
  </dependency>
  <!-- Apache Commons Lang -->
  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
  </dependency>
  <!-- Apache Commons BeanUtils -->
  <dependency>
    <groupId>commons-beanutils</groupId>
    <artifactId>commons-beanutils</artifactId>
  </dependency>
  <!-- Google Guava core libraries -->
  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
  </dependency>
  <!-- SLF4J + Log4j logging binding -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
  </dependency>
</dependencies>

  3. Common RPC interface project
    (figure)
    Responsibilities:
    Defines the RPC interfaces. The server provides the concrete implementations, while the client imports the interfaces and calls the matching service based on the registry information.
    POM dependencies:
<dependencies>
  <!-- Common module -->
  <dependency>
    <groupId>com.itcast.rpc</groupId>
    <artifactId>rpc-common</artifactId>
    <version>${project.version}</version>
  </dependency>
</dependencies>

  4. Client project
    (figure)
    Responsibilities:
    Implements the RPC client, including service subscription, dynamic proxying, and synchronous invocation over Netty.
    POM dependencies:
<dependencies>
  <!-- RPC interface module -->
  <dependency>
    <groupId>com.itcast.rpc</groupId>
    <artifactId>rpc-interface-api</artifactId>
    <version>${project.version}</version>
  </dependency>
  <!-- RPC common components -->
  <dependency>
    <groupId>com.itcast.rpc</groupId>
    <artifactId>rpc-common</artifactId>
    <version>${project.version}</version>
  </dependency>
  <!-- Netty -->
  <dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
  </dependency>
  <!-- ZooKeeper client -->
  <dependency>
    <groupId>com.github.sgroschupf</groupId>
    <artifactId>zkclient</artifactId>
  </dependency>
  <!-- CGLIB dynamic proxy -->
  <dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib</artifactId>
  </dependency>
  <!-- Reflections metadata scanning -->
  <dependency>
    <groupId>org.reflections</groupId>
    <artifactId>reflections</artifactId>
  </dependency>
  <!-- Spring Boot web starter -->
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>${spring.boot.version}</version>
  </dependency>
  <!-- SLF4J + Log4j logging binding -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
  </dependency>
</dependencies>

  5. Server project

(figure)
Responsibilities:
Implements the RPC server, including service registration, the implementation of the RPC interfaces, and Netty-based server-side communication.
POM dependencies:

<dependencies>
  <!-- Common RPC interface module -->
  <dependency>
    <groupId>com.itcast.rpc</groupId>
    <artifactId>rpc-interface-api</artifactId>
    <version>${project.version}</version>
  </dependency>
  <!-- Common components module -->
  <dependency>
    <groupId>com.itcast.rpc</groupId>
    <artifactId>rpc-common</artifactId>
    <version>${project.version}</version>
  </dependency>
  <!-- Netty -->
  <dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
  </dependency>
  <!-- ZooKeeper client -->
  <dependency>
    <groupId>com.github.sgroschupf</groupId>
    <artifactId>zkclient</artifactId>
  </dependency>
  <!-- Spring Boot web starter -->
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>${spring.boot.version}</version>
  </dependency>
  <!-- SLF4J + Log4j logging binding -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
  </dependency>
</dependencies>

2.3 RPC Common Component Implementation

2.3.1 RPC Interface Annotations

(figure)
RPC client annotation (@RpcClient):
Marks an RPC interface; when the dynamic-proxy scanner runs, it creates a proxy class for every interface carrying this annotation.
RPC server annotation (@RpcService):
Declares a server-side implementation of an RPC interface; during service registration, all annotated implementations are scanned and their interface information is published to the registry.
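
The exact annotation definitions are not reproduced in the article; a minimal sketch of what they might look like is shown below (the retention/target settings and the @Component meta-annotation are assumptions, while the cls() attribute matches its later use in ServicePushManager):

import org.springframework.stereotype.Component;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks a client-side RPC interface; the proxy scanner creates a CGLIB proxy
 * for every type carrying this annotation. (Each annotation lives in its own file in the project.)
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface RpcClient {
}

/**
 * Marks a server-side implementation of an RPC interface; the registration
 * scanner publishes the interface returned by cls() to the service registry.
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface RpcService {
    Class<?> cls();
}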

2.3.2 Request/Response Wrappers

(figure)
Request wrapper (RpcRequest):
Wraps the request information in one object so calls can be handled and extended uniformly; it carries the request ID, the request parameters, the target interface, and so on.
Response wrapper (RpcResponse):
Wraps the response information uniformly, so serialization, request callbacks, and the dynamic proxy can all process it in the same way instead of adapting to a different return type for each interface.
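
A minimal sketch of the two wrappers, based only on the fields used elsewhere in this article (requestId, className, methodName, parameterTypes, parameters, result, cause); the Lombok annotations and anything beyond those fields are assumptions:

import lombok.Builder;
import lombok.Data;

import java.io.Serializable;

/** Unified request wrapper sent from the client to the server. */
@Data
@Builder
public class RpcRequest implements Serializable {
    private String requestId;          // globally unique request id
    private String className;          // fully qualified interface name
    private String methodName;         // target method name
    private Class<?>[] parameterTypes; // method signature
    private Object[] parameters;       // actual arguments
}

/** Unified response wrapper returned from the server to the client. */
@Data
public class RpcResponse implements Serializable {
    private String requestId;  // echoes the request id for correlation
    private Object result;     // return value of the invoked method
    private Throwable cause;   // populated when the invocation failed

    /** The call is considered failed when an exception was recorded. */
    public boolean isError() {
        return cause != null;
    }
}

In the project each wrapper would live in its own file under the rpc-common module (package com.itcast.common.data).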

2.3.3 Protostuff Serialization

Google's protobuf requires writing .proto files and compiling them into Java classes with a separate tool, which is fairly cumbersome. Protostuff is built on top of protobuf and removes that step: no .proto script is needed, because it can inspect object metadata at runtime and serialize objects directly.

  • Protostuff serialization utility:


import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtobufIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;
import org.objenesis.Objenesis;
import org.objenesis.ObjenesisStd;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProtoSerializerUtil {
    /**
     * Cache of schemas for classes that have already been serialized
     */
    private static Map<Class<?>, Schema<?>> classSchemaMap = new
            ConcurrentHashMap<>();
    /**
     * Instantiates objects without invoking constructors; instances are cached
     */
    private static Objenesis objenesis = new ObjenesisStd(true);

    /**
     * Serialize an object
     *
     * @param t
     * @param <T>
     * @return
     */
    public static <T> byte[] serialize(T t) {
        Class<T> cls = (Class<T>) t.getClass();
        LinkedBuffer buffer =
                LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        try {
            Schema<T> schema = getClassSchema(cls);
            return ProtobufIOUtil.toByteArray(t, schema, buffer);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        } finally {
            buffer.clear();
        }
    }

    /**
     * Deserialize an object
     *
     * @param bytes
     * @param cls
     * @param <T>
     * @return
     */
    public static <T> T deserialize(byte[] bytes, Class<T> cls) {
        try {
            Schema<T> schema = getClassSchema(cls);
            T message = objenesis.newInstance(cls);
            ProtobufIOUtil.mergeFrom(bytes, message, schema);
            return message;
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

    /**
     * Get the schema (class metadata) for the given class
     *
     * @param cls
     * @param <T>
     * @return
     */
    private static <T> Schema<T> getClassSchema(Class<T> cls) {
        Schema<T> classSchema = null;
        if (classSchemaMap.containsKey(cls)) {
            classSchema = (Schema<T>) classSchemaMap.get(cls);
        } else {
            classSchema = RuntimeSchema.getSchema(cls);
            if (classSchema != null) {
                classSchemaMap.put(cls, classSchema);
            }
        }
        return classSchema;
    }
}
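
For illustration, a quick round trip through the utility above (the Order payload class here is purely hypothetical):

import com.itcast.common.utils.ProtoSerializerUtil;

public class ProtoSerializerDemo {

    /** Hypothetical payload class used only for this example. */
    public static class Order {
        private String orderId;
        private double amount;

        public String getOrderId() { return orderId; }
        public void setOrderId(String orderId) { this.orderId = orderId; }
        public double getAmount() { return amount; }
        public void setAmount(double amount) { this.amount = amount; }
    }

    public static void main(String[] args) {
        Order order = new Order();
        order.setOrderId("20200101000001");
        order.setAmount(99.9);

        // Serialize to a protostuff byte array, then restore a copy from it.
        byte[] bytes = ProtoSerializerUtil.serialize(order);
        Order copy = ProtoSerializerUtil.deserialize(bytes, Order.class);
        System.out.println(copy.getOrderId() + " / " + copy.getAmount());
    }
}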

  • Allocating the user-space buffer, LinkedBuffer:

Allocates a user-space buffer with a default size of DEFAULT_BUFFER_SIZE (512 bytes) to improve serialization performance.
LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);

  • Schema cache, classSchemaMap:

Caches each class's schema, i.e. its structural metadata (fields, methods, access modifiers), obtained by scanning with the protostuff-runtime component.
private static Map<Class<?>, Schema<?>> classSchemaMap = new ConcurrentHashMap<>();

  • Serialization entry point, serialize:

Allocates the buffer, obtains the class schema, and then calls protostuff to serialize the object.

  • Deserialization entry point, deserialize:

First obtains the class schema, then creates an empty instance with objenesis, and finally fills it via mergeFrom to complete deserialization.

  • How Protostuff serialization works underneath
    Serialization entry point ProtobufIOUtil.toByteArray:
public static <T> byte[] toByteArray(T message, Schema<T> schema, LinkedBuffer buffer)
{
    // 1. Make sure the buffer has not been used without being reset
    if (buffer.start != buffer.offset)
        throw new IllegalArgumentException("Buffer previously used and had not been reset.");
    // 2. Create the Protobuf output object
    final ProtobufOutput output = new ProtobufOutput(buffer);
    try
    {
        // 3. Serialize the message through its Schema
        schema.writeTo(output, message);
    }
    catch (IOException e)
    {
        throw new RuntimeException("Serializing to a byte array threw an IOException" +
                "(should never happen).", e);
    }
    return output.toByteArray();
}

Inside schema.writeTo:
(figure)
It ultimately calls MappedSchema's writeTo method, which iterates over all fields and serializes them one by one.
(figure)
Continuing into the writeString implementation:

public void writeString(int fieldNumber, String value, boolean repeated)
        throws IOException
{
    // Encode the string as UTF-8 bytes, written as a length-delimited field
    tail = writeUTF8VarDelimited(
            value,
            this,
            writeRawVarInt32(makeTag(fieldNumber, WIRETYPE_LENGTH_DELIMITED), this, tail));
}

  • How Protostuff deserialization works underneath

Deserialization entry point ProtobufIOUtil.mergeFrom:

public static <T> void mergeFrom(byte[] data, T message, Schema<T> schema)
{
    // Deserialize the whole byte array, starting at offset 0
    IOUtil.mergeFrom(data, 0, data.length, message, schema, false);
}

The IOUtil.mergeFrom implementation:

static <T> void mergeFrom(byte[] data, int offset, int length, T message,
        Schema<T> schema, boolean decodeNestedMessageAsGroup)
{
    try
    {
        // Wrap the byte array in an input object
        final ByteArrayInput input = new ByteArrayInput(data, offset,
                length,
                decodeNestedMessageAsGroup);
        // Delegate to the schema to deserialize the fields
        schema.mergeFrom(input, message);
        // Verify that the message ended cleanly
        input.checkLastTagWas(0);
    }
    ...

This in turn calls MappedSchema's mergeFrom, which deserializes the fields one by one.
(figure)

2.3.4 Other Shared Utilities
  • IP utility
    Provides the server's IP information when the service is registered.
    The IpUtil helper:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.net.UnknownHostException;
import java.util.Enumeration;

public class IpUtil {

    private static final Logger LOGGER = LoggerFactory.getLogger(IpUtil.class);

    public static String getHostAddress() {
        String host = null;
        try {
            host = InetAddress.getLocalHost().getHostAddress();
            return host;
        } catch (UnknownHostException e) {
            LOGGER.error("Cannot get server host.", e);
        }
        return host;
    }

    /**
     * Get the real (externally usable) IP address of this host
     * @return
     */
    public static String getRealIp()  {
        String localIp = null;
        String netIp = null;
        
        try {
            // Enumerate all network interfaces of the current host
            Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
            boolean finded = false;
            InetAddress ip = null;
            // Iterate over every network interface
            while (networkInterfaces.hasMoreElements() && !finded) {
                NetworkInterface networkInterface = networkInterfaces.nextElement();
                Enumeration<InetAddress> addresses = networkInterface.getInetAddresses();
                // Iterate over every address bound to the interface
                while (addresses.hasMoreElements()) {
                    ip = addresses.nextElement();
                    if (!ip.isSiteLocalAddress() && !ip.isLoopbackAddress() && ip.getHostAddress().indexOf(":") == -1) {
                        // Non-site-local (public) IPv4 address: take it as the real IP
                        netIp = ip.getHostAddress();
                        finded = true;
                        break;
                    } else if (ip.isSiteLocalAddress() && !ip.isLoopbackAddress() && ip.getHostAddress().indexOf(":") == -1) {
                        // Site-local (internal) IPv4 address: keep it as a fallback
                        localIp = ip.getHostAddress();
                    }
                }
            }

            if (netIp != null && !"".equals(netIp)) {
                return netIp;
            } else {
                return localIp;
            }
        } catch (SocketException ex) {
            throw new RpcException(ex);
        }
    }
}

  • Distributed global ID generator
    The snowflake algorithm:
    Snowflake is a distributed ID generation algorithm open-sourced by Twitter; the result is a long ID. Its core idea: 41 bits carry the millisecond timestamp, 10 bits identify the machine (5 bits for the datacenter and 5 bits for the worker), and 12 bits are a per-millisecond sequence number (so each node can produce 4096 IDs per millisecond); the remaining bit is the sign bit, which is always 0.
    (figure)
  • First bit

Occupies 1 bit; its value is always 0 (reserved sign bit).

  • Timestamp

Occupies 41 bits with millisecond precision, enough to cover roughly 69 years.

  • Worker ID

Occupies 10 bits: the high 5 bits are the datacenter ID (datacenterId) and the low 5 bits are the worker ID (workerId), allowing up to 1024 nodes.

  • Sequence number

Occupies 12 bits; within the same millisecond on the same node it counts up from 0 to at most 4095.
Across all nodes, snowflake can therefore produce 2^10 * 2^12 = 1024 * 4096 = 4194304 unique IDs per millisecond, which is more than enough for high-concurrency scenarios.
Implementation, GlobalIDGenerator:


import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import java.net.NetworkInterface;

/**
 * <p>Name: GlobalIDGenerator.java</p>
 * <p>Description: distributed auto-increasing ID generator</p>
 * <pre>
 *     A Java implementation of Twitter's Snowflake scheme
 * </pre>
 * The bit layout is shown below, where each 0 stands for one bit and the dashes separate the parts:
 * 1||0---0000000000 0000000000 0000000000 0000000000 0 --- 00000 ---00000 ---000000000000
 * The first bit is unused (it doubles as the sign bit of the long), the next 41 bits are the
 * millisecond timestamp, followed by 5 datacenter bits, 5 worker bits, and finally 12 bits that
 * count within the current millisecond, which adds up to exactly 64 bits, i.e. one long.
 * The benefit is that IDs are roughly ordered by time and never collide across the cluster
 * (datacenter id and worker id tell the nodes apart), and generation is fast: in tests a single
 * instance produces around 260,000 IDs per second, which is more than sufficient.
 * <p>
 * 64-bit ID (1 sign bit + 41 timestamp bits + 5 datacenter bits + 5 worker bits + 12 sequence bits)
 * Sample generated IDs:
 * 1154628864413139070
 * 1154628864413139071
 * 1154628864413139072
 * 1154628864413139073
 */
public class GlobalIDGenerator {

    // Epoch start timestamp used as the baseline; usually a recent time (must never change once set)
    private final static long twepoch = 1288834974657L;
    // Number of bits for the worker id
    private final static long workerIdBits = 5L;
    // Number of bits for the datacenter id
    private final static long datacenterIdBits = 5L;
    // Maximum worker id
    private final static long maxWorkerId = -1L ^ (-1L << workerIdBits);
    // Maximum datacenter id
    private final static long maxDatacenterId = -1L ^ (-1L << datacenterIdBits);
    // Number of bits for the per-millisecond sequence
    private final static long sequenceBits = 12L;
    // Worker id is shifted left by 12 bits
    private final static long workerIdShift = sequenceBits;
    // Datacenter id is shifted left by 17 bits
    private final static long datacenterIdShift = sequenceBits + workerIdBits;
    // Timestamp is shifted left by 22 bits
    private final static long timestampLeftShift = sequenceBits + workerIdBits + datacenterIdBits;

    private final static long sequenceMask = -1L ^ (-1L << sequenceBits);
    /* Timestamp of the last generated id */
    private static long lastTimestamp = -1L;
    // Sequence within the current millisecond, used for concurrency control
    private long sequence = 0L;

    // Worker id; each node in a distributed deployment must use a different value (at most 31)
    private final long workerId;
    // Datacenter/business id (at most 31)
    private final long datacenterId;

    private static class SingletonGlobalIDGenerator {
        // Singleton instance, with workerId = 1 and datacenterId = 1 by default
        private static GlobalIDGenerator instance = new GlobalIDGenerator(1, 1);
    }


    public GlobalIDGenerator() {
        this.datacenterId = getDatacenterId(maxDatacenterId);
        this.workerId = getMaxWorkerId(datacenterId, maxWorkerId);
    }

    /**
     * @param workerId     worker id
     * @param datacenterId datacenter id
     */
    public GlobalIDGenerator(long workerId, long datacenterId) {
        if (workerId > maxWorkerId || workerId < 0) {
            throw new IllegalArgumentException(String.format("worker Id can't be greater than %d or less than 0", maxWorkerId));
        }
        if (datacenterId > maxDatacenterId || datacenterId < 0) {
            throw new IllegalArgumentException(String.format("datacenter Id can't be greater than %d or less than 0", maxDatacenterId));
        }
        this.workerId = workerId;
        this.datacenterId = datacenterId;
    }

    /**
     * Get the singleton instance
     * @return
     */
    public static GlobalIDGenerator getInstance() {
        return SingletonGlobalIDGenerator.instance;
    }

    /**
     * Get the next ID
     *
     * @return
     */
    public synchronized long nextId() {
        long timestamp = timeGen();
        if (timestamp < lastTimestamp) {
            throw new RuntimeException(String.format("Clock moved backwards.  Refusing to generate id for %d milliseconds", lastTimestamp - timestamp));
        }

        if (lastTimestamp == timestamp) {
            // Same millisecond as the previous id: increment the sequence
            sequence = (sequence + 1) & sequenceMask;
            if (sequence == 0) {
                // Sequence exhausted for this millisecond: spin until the next millisecond
                timestamp = tilNextMillis(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = timestamp;
        // Combine the parts with bit shifts to produce the final ID
        long nextId = ((timestamp - twepoch) << timestampLeftShift)
                | (datacenterId << datacenterIdShift)
                | (workerId << workerIdShift) | sequence;

        return nextId;
    }

    /**
     * Get the next ID as a string
     * @return
     */
    public synchronized  String nextStrId(){
        return String.valueOf(nextId());
    }

    private long tilNextMillis(final long lastTimestamp) {
        long timestamp = this.timeGen();
        while (timestamp <= lastTimestamp) {
            timestamp = this.timeGen();
        }
        return timestamp;
    }

    private long timeGen() {
        return System.currentTimeMillis();
    }

    /**
     * <p>
     * Compute the workerId (bounded by maxWorkerId)
     * </p>
     */
    protected static long getMaxWorkerId(long datacenterId, long maxWorkerId) {
        StringBuffer mpid = new StringBuffer();
        mpid.append(datacenterId);
        String name = ManagementFactory.getRuntimeMXBean().getName();
        if (!name.isEmpty()) {
            /*
             * GET jvmPid
             */
            mpid.append(name.split("@")[0]);
        }
        /*
         * Take the low 16 bits of the hashCode of MAC + PID
         */
        return (mpid.toString().hashCode() & 0xffff) % (maxWorkerId + 1);
    }

    /**
     * <p>
     * Compute the datacenter id part
     * </p>
     */
    protected static long getDatacenterId(long maxDatacenterId) {
        long id = 0L;
        try {
            InetAddress ip = InetAddress.getLocalHost();
            NetworkInterface network = NetworkInterface.getByInetAddress(ip);
            if (network == null) {
                id = 1L;
            } else {
                byte[] mac = network.getHardwareAddress();
                id = ((0x000000FF & (long) mac[mac.length - 1])
                        | (0x0000FF00 & (((long) mac[mac.length - 2]) << 8))) >> 6;
                id = id % (maxDatacenterId + 1);
            }
        } catch (Exception e) {
            System.out.println(" getDatacenterId: " + e.getMessage());
        }
        return id;
    }

    public static void main(String[] args) {
        GlobalIDGenerator id = new GlobalIDGenerator(0, 1);
        for (int i = 0; i < 100000; i++) {
            System.err.println(id.nextId());
        }
    }
}

2.4 RPC Common Interface Implementation

Defines all the RPC interfaces; both the client and server projects depend on it. An order-service interface is provided here.
Each interface must carry the @RpcClient annotation so that the client project can discover it when scanning for RPC interfaces.
(figure)
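
The figure with the actual interface is not reproduced here; a minimal sketch of what such an order-service interface could look like is given below (the interface name and the method signature are assumptions):

import com.itcast.common.annotation.RpcClient;

/**
 * Hypothetical order-service RPC interface. The @RpcClient annotation lets the
 * client-side scanner find it and register a CGLIB proxy bean for it.
 */
@RpcClient
public interface OrderService {

    /** Query order information by order id (illustrative signature). */
    String getOrder(String orderId);
}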

2.5 RPC Client Implementation

Overall client-side functionality:
(figure)

2.5.1 Client-Side Netty Communication

The RPC layer uses Netty for high-performance transport. The concrete implementation:
RpcRequestManager, which sends the client request:

/**
 * Send a client request
 *
 * @param rpcRequest
 * @throws InterruptedException
 * @throws RpcException
 */
public static RpcResponse sendRequest(RpcRequest rpcRequest) throws
        InterruptedException, RpcException {
    // 1. Get the provider list for this interface from the local cache
    List<ProviderService> providerServices =
            SERVICE_ROUTE_CACHE.getServiceRoutes(rpcRequest.getClassName());
    // 2. Pick the first provider from the list
    ProviderService targetServiceProvider = providerServices.get(0);
    if (targetServiceProvider != null) {
        String requestId = rpcRequest.getRequestId();
        // 3. Issue the remote call
        RpcResponse response = requestByNetty(rpcRequest,
                targetServiceProvider);
        LOGGER.info("Send request[{}:{}] to service provider successfully",
                requestId, rpcRequest.toString());
        return response;
    } else {
        throw new RpcException(StatusEnum.NOT_FOUND_SERVICE_PROVINDER);
    }
}

The requestByNetty remote-call implementation:

    /**
     * Issue the remote call over Netty
     */
    public static RpcResponse requestByNetty(RpcRequest rpcRequest, ProviderService providerService) {

        // 1. Configure the Netty client bootstrap
        EventLoopGroup worker = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap();
        bootstrap.group(worker)
                .channel(NioSocketChannel.class)
                .remoteAddress(providerService.getServerIp(), providerService.getNetworkPort())
                .handler(rpcClientInitializer);
        try {
            // 2. Establish the connection
            ChannelFuture future = bootstrap.connect().sync();
            if (future.isSuccess()) {
                ChannelHolder channelHolder = ChannelHolder.builder()
                        .channel(future.channel())
                        .eventLoopGroup(worker)
                        .build();
                LOGGER.info("Construct a connector with service provider[{}:{}] successfully",
                        providerService.getServerIp(),
                        providerService.getNetworkPort()
                );

                // 3. Create the request callback (future) object
                final RequestFuture<RpcResponse> responseFuture = new SyncRequestFuture(rpcRequest.getRequestId());
                // 4. Put the callback into the pending-request cache
                SyncRequestFuture.syncRequest.put(rpcRequest.getRequestId(), responseFuture);
                // 5. Write the request down the connected channel
                ChannelFuture channelFuture = channelHolder.getChannel().writeAndFlush(rpcRequest);
                // 6. Register a listener for the write operation
                channelFuture.addListener(new ChannelFutureListener() {
                    @Override
                    public void operationComplete(ChannelFuture future) throws Exception {
                        // 7. Record whether the write succeeded
                        responseFuture.setWriteResult(future.isSuccess());
                        if(!future.isSuccess()) {
                            // Write failed: remove the pending request from the cache
                            SyncRequestFuture.syncRequest.remove(responseFuture.requestId());
                        }
                    }
                });
                // 8. Block for up to 3 seconds waiting for the response
                RpcResponse result = responseFuture.get(3, TimeUnit.SECONDS);
                // 9. Remove the pending request from the cache
                SyncRequestFuture.syncRequest.remove(rpcRequest.getRequestId());

                return result;
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }

        return null;
    }

The call is made synchronously. SyncRequestFuture, the request callback object, implements the Future interface:
It is used by the synchronous RPC request above, and the pending callbacks are tracked in a cache keyed by request ID.

package com.itcast.rpc.client.runner;

import com.itcast.common.data.RpcResponse;

import java.util.Map;
import java.util.concurrent.*;

/**
 * <p>Description: </p>
 * @date 
 * @author 
 * @version 1.0
 * <p>Copyright:Copyright(c)2020</p>
 */
public class SyncRequestFuture implements RequestFuture<RpcResponse> {

    // Cache of pending request callbacks, keyed by request id
    public static Map<String, RequestFuture> syncRequest = new ConcurrentHashMap<String, RequestFuture>();
    // Latch that blocks the caller until the response arrives
    private CountDownLatch latch = new CountDownLatch(1);
    // Start time, used to decide whether the call has timed out
    private final long begin = System.currentTimeMillis();
    // Timeout setting
    private long timeout;
    // RPC response object
    private RpcResponse response;
    // Request id
    private final String requestId;
    // Whether the request was written successfully
    private boolean writeResult;
    // Exception raised during the call, if any
    private Throwable cause;
    // Whether the call has timed out
    private boolean isTimeout = false;

    public SyncRequestFuture(String requestId) {
        this.requestId = requestId;
    }

    /**
     * Constructor
     * @param requestId
     * @param timeout
     */
    public SyncRequestFuture(String requestId, long timeout) {
        this.requestId = requestId;
        this.timeout = timeout;
        writeResult = true;
        isTimeout = false;
    }

    /**
     * Get the recorded exception
     * @return
     */
    public Throwable cause() {
        return cause;
    }

    /**
     * Record an exception
     * @param cause
     */
    public void setCause(Throwable cause) {
        this.cause = cause;
    }

    /**
     * Whether the request was written successfully
     * @return
     */
    public boolean isWriteSuccess() {
        return writeResult;
    }

    /**
     * Record the write result
     * @param result
     */
    public void setWriteResult(boolean result) {
        this.writeResult = result;
    }

    /**
     * Get the request id
     * @return
     */
    public String requestId() {
        return requestId;
    }

    /**
     * Get the response
     * @return
     */
    public RpcResponse response() {
        return response;
    }

    /**
     * Set the response and release the waiting caller
     * @param response
     */
    public void setResponse(RpcResponse response) {
        this.response = response;
        latch.countDown();
    }

    /**
     * Cancel the call
     * @param mayInterruptIfRunning
     * @return
     */
    public boolean cancel(boolean mayInterruptIfRunning) {
        return true;
    }

    /**
     * Whether the call was cancelled
     * @return
     */
    public boolean isCancelled() {
        return false;
    }

    /**
     * Whether the call has completed
     * @return
     */
    public boolean isDone() {
        return false;
    }

    /**
     * Get the response (blocks until it arrives)
     * @return
     * @throws InterruptedException
     * @throws ExecutionException
     */
    public RpcResponse get() throws InterruptedException, ExecutionException {
        // Wait until setResponse() counts the latch down
        latch.await();
        return response;
    }

    /**
     * Get the response, waiting at most the given time
     * @param timeout
     * @param unit
     * @return
     * @throws InterruptedException
     * @throws ExecutionException
     * @throws TimeoutException
     */
    public RpcResponse get(long timeout, TimeUnit unit) throws InterruptedException, ExecutionException, TimeoutException {
        if (latch.await(timeout, unit)) {
            return response;
        }
        return null;
    }

    /**
     * Whether the call has timed out
     * @return
     */
    public boolean isTimeout() {
        if (isTimeout) {
            return isTimeout;
        }
        return System.currentTimeMillis() - begin > timeout;
    }
}

Once the server has processed the call, the client still has to complete the pending callback when the response data arrives.
The RpcResponseHandler implementation:


import com.itcast.common.data.RpcResponse;
import com.itcast.rpc.client.runner.RpcRequestPool;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

/**
 * Handler for RPC response data received on the client side
 */
@Component
@ChannelHandler.Sharable
public class RpcResponseHandler extends SimpleChannelInboundHandler<RpcResponse> {

    @Autowired
    private RpcRequestPool requestPool;

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcResponse znsResponse) throws Exception {
        // Data received: notify the pending request callback
        requestPool.notifyRequest(znsResponse.getRequestId(), znsResponse);
    }
}

That is the whole client-side call flow. But how is the request data actually serialized for transport?

Serialization (encoding) is done by RpcClientEncodeHandler, which extends MessageToByteEncoder:

import com.itcast.common.data.RpcRequest;
import com.itcast.common.utils.ProtoSerializerUtil;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

/**
 * RPC client encoder
 */
public class RpcClientEncodeHandler extends MessageToByteEncoder<RpcRequest> {

    @Override
    protected void encode(ChannelHandlerContext ctx, RpcRequest rpcRequest, ByteBuf in) throws Exception {
        // Serialize the request with the shared Protostuff utility
        byte[] bytes = ProtoSerializerUtil.serialize(rpcRequest);
        in.writeInt(bytes.length);
        in.writeBytes(bytes);
    }
}

Deserialization (decoding) is done by RpcClientDecodeHandler, which extends ByteToMessageDecoder:

import com.itcast.common.data.RpcResponse;
import com.itcast.common.utils.ProtoSerializerUtil;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

/**
 * Client-side decoder
 */
public class RpcClientDecodeHandler extends ByteToMessageDecoder {

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> list) throws Exception {
        // Need at least the 4-byte length prefix
        if (in.readableBytes() <= 4) {
            return;
        }

        // Mark the reader index before consuming the length prefix,
        // so an incomplete frame can be re-read on the next call
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();
        } else {
            byte[] bytes = new byte[length];
            in.readBytes(bytes);
            // Decode the response with the shared Protostuff utility
            RpcResponse znsResponse = ProtoSerializerUtil.deserialize(bytes, RpcResponse.class);
            list.add(znsResponse);
        }
    }
}

If a different serialization scheme is needed later, only the ProtoSerializerUtil implementation has to change, in one place.

2.5.2 Dynamic Proxy Configuration

The dynamic proxy is implemented with CGLIB; it proxies the RPC interfaces and intercepts their method calls.
The proxy interception handler, ProxyHelper:
It exposes a method for creating proxy instances and, inside the interceptor, turns method calls into remote RPC requests.

import com.itcast.common.data.RpcRequest;
import com.itcast.common.data.RpcResponse;
import com.itcast.common.utils.RequestIdUtil;
import com.itcast.rpc.client.runner.RpcRequestManager;
import com.itcast.rpc.client.runner.RpcRequestPool;
import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;

/**
 * Dynamic proxy interception handler
 */
@Component
public class ProxyHelper {

    @Autowired
    private RpcRequestPool rpcRequestPool;

    public <T> T newProxyInstance(Class<T> cls) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(cls);
        enhancer.setCallback(new ProxyCallBackHandler());
        return (T) enhancer.create();
    }

    class ProxyCallBackHandler implements MethodInterceptor {

        @Override
        public Object intercept(Object o, Method method, Object[] args, MethodProxy methodProxy) throws Throwable {
            return doIntercept(method, args);
        }

        /**
         * Intercept a call to an RPC interface
         * @param method
         * @param parameters
         * @return
         * @throws Throwable
         */
        private Object doIntercept(Method method, Object[] parameters) throws Throwable {
            String requestId = RequestIdUtil.requestId();
            String className = method.getDeclaringClass().getName();
            String methodName = method.getName();
            Class<?>[] parameterTypes = method.getParameterTypes();
            // 1. Build the RPC request
            RpcRequest znsRequest = RpcRequest.builder()
                    .requestId(requestId)
                    .className(className)
                    .methodName(methodName)
                    .parameterTypes(parameterTypes)
                    .parameters(parameters)
                    .build();
            // 2. Send the RPC request
            RpcRequestManager.sendRequest(znsRequest);
            // 3. Fetch the call result from the pending-request pool
            RpcResponse znsResponse = rpcRequestPool.fetchResponse(requestId);
            if (znsResponse == null) {
                return null;
            }

            if (znsResponse.isError()) {
                throw znsResponse.getCause();
            }
            // 4. Return the call result
            return znsResponse.getResult();
        }
    }
}

Implementation flow:

  1. Build the RPC request
  2. Send the request synchronously
  3. Return the call result
    With the proxy in place, we still need a scanner that registers the proxies, ServiceProxyManager:
import com.itcast.common.annotation.RpcClient;
import com.itcast.common.utils.SpringBeanFactory;
import com.itcast.rpc.client.config.RpcClientConfiguration;
import org.apache.commons.collections.CollectionUtils;
import org.reflections.Reflections;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.support.DefaultListableBeanFactory;
import org.springframework.stereotype.Component;

import java.util.Set;

/**
 * Dynamic proxy manager
 */
@Component
public class ServiceProxyManager {

    private static final Logger LOGGER = LoggerFactory.getLogger(ServiceProxyManager.class);

    @Autowired
    private RpcClientConfiguration configuration;

    @Autowired
    private ProxyHelper proxyHelper;

    public void initServiceProxyInstance() {
        Reflections reflections = new Reflections(configuration.getRpcClientApiPackage());
        Set<Class<?>> typesAnnotatedWith = reflections.getTypesAnnotatedWith(RpcClient.class);
        if (CollectionUtils.isEmpty(typesAnnotatedWith)) {
            return;
        }

        DefaultListableBeanFactory beanFactory = (DefaultListableBeanFactory) SpringBeanFactory.context()
                .getAutowireCapableBeanFactory();
        for (Class<?> cls : typesAnnotatedWith) {
            RpcClient znsClient = cls.getAnnotation(RpcClient.class);
            String serviceName = cls.getName();
            beanFactory.registerSingleton(serviceName, proxyHelper.newProxyInstance(cls));
        }

        LOGGER.info("Initialize proxy for service successfully");
    }
}

Main flow:

  1. Create the Reflections scanner for the configured RPC package
  2. Collect all classes annotated with @RpcClient
  3. Obtain the container's bean factory
  4. Iterate over the scanned RPC interface classes
  5. Read the @RpcClient annotation information
  6. Create the proxy instance and register it as a singleton bean
2.5.3 ZK Service Subscription

The client-side service subscription implementation:

import com.itcast.common.annotation.RpcClient;
import com.itcast.rpc.client.cache.ServiceRouteCache;
import com.itcast.rpc.client.channel.ProviderService;
import com.itcast.rpc.client.config.RpcClientConfiguration;
import org.apache.commons.collections.CollectionUtils;
import org.reflections.Reflections;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Set;

/**
 * Manager that pulls registered services from the registry
 */
@Component
public class ServicePullManager {

    private static final Logger LOGGER = LoggerFactory.getLogger(ServicePullManager.class);

    @Autowired
    private ZKit zKit;

    @Autowired
    private ServiceRouteCache serviceRouteCache;

    @Autowired
    private RpcClientConfiguration configuration;

    public void pullServiceFromZK() {
        Reflections reflections = new Reflections(configuration.getRpcClientApiPackage());
        Set<Class<?>> typesAnnotatedWith = reflections.getTypesAnnotatedWith(RpcClient.class);
        if (CollectionUtils.isEmpty(typesAnnotatedWith)) {
            return;
        }
        for (Class<?> cls : typesAnnotatedWith) {
            String serviceName = cls.getName();

            // Cache service provider list into local
            List<ProviderService> providerServices = zKit.getServiceInfos(serviceName);
            serviceRouteCache.addCache(serviceName, providerServices);

            // Add listener for service node
            zKit.subscribeZKEvent(serviceName);
        }

        LOGGER.info("Pull service address list from zookeeper successfully");
    }
}

To speed up lookups of service information and avoid connecting to ZooKeeper on every call, the provider list is cached locally.

  1. Scan for the RPC interfaces
  2. Pull the service information from ZK and put it into the cache
  3. Subscribe to change events on the ZK service nodes
    The ZK client implementation:
import com.google.common.collect.Lists;
import com.itcast.rpc.client.cache.ServiceRouteCache;
import com.itcast.rpc.client.channel.ProviderService;
import com.itcast.rpc.client.config.RpcClientConfiguration;
import org.I0Itec.zkclient.IZkChildListener;
import org.I0Itec.zkclient.ZkClient;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.stream.Collectors;

@Component
public class ZKit {

    @Autowired
    private RpcClientConfiguration configuration;

    @Autowired
    private ZkClient zkClient;

    @Autowired
    private ServiceRouteCache serviceRouteCache;

    /**
     * Subscribe to a service node
     * @param serviceName
     */
    public void subscribeZKEvent(String serviceName) {
        String path = configuration.getZkRoot() + "/" + serviceName;
        zkClient.subscribeChildChanges(path, new IZkChildListener() {
            @Override
            public void handleChildChange(String parentPath, List<String> list) throws Exception {
                if (CollectionUtils.isNotEmpty(list)) {
                    List<ProviderService> providerServices = convertToProviderService(list);
                    serviceRouteCache.updateCache(serviceName, providerServices);
                }
            }
        });
    }

    public List<ProviderService> getServiceInfos(String serviceName) {
        String path = configuration.getZkRoot() + "/" + serviceName;
        List<String> children = zkClient.getChildren(path);

        List<ProviderService> providerServices = convertToProviderService(children);
        return providerServices;
    }

    private List<ProviderService> convertToProviderService(List<String> list) {
        if (CollectionUtils.isEmpty(list)) {
            return Lists.newArrayListWithCapacity(0);
        }
        List<ProviderService> providerServices = list.stream().map(v -> {
            String[] serviceInfos = v.split(":");
            return ProviderService.builder()
                    .serverIp(serviceInfos[0])
                    .serverPort(Integer.parseInt(serviceInfos[1]))
                    .networkPort(Integer.parseInt(serviceInfos[2]))
                    .build();
        }).collect(Collectors.toList());
        return providerServices;
    }
}

Implementation flow:

  1. Assemble the service node path.
  2. Subscribe to the service node.
  3. Check whether the returned child node list is empty.
  4. Convert the information fetched from ZooKeeper into provider records.
  5. Update the cached provider list.
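
How these client-side pieces are wired together at startup is not shown in the article; a minimal sketch, assuming a Spring Boot startup hook (the RpcClientRunner class and its wiring are assumptions), could look like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

/**
 * Hypothetical startup hook: first pulls the provider addresses from ZooKeeper,
 * then registers CGLIB proxy beans for all @RpcClient interfaces.
 */
@Component
public class RpcClientRunner implements CommandLineRunner {

    @Autowired
    private ServicePullManager servicePullManager;

    @Autowired
    private ServiceProxyManager serviceProxyManager;

    @Override
    public void run(String... args) throws Exception {
        // Cache providers and subscribe to node changes
        servicePullManager.pullServiceFromZK();
        // Register proxy beans for the scanned RPC interfaces
        serviceProxyManager.initServiceProxyInstance();
    }
}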

3. RPC Server Implementation

(figure)

3.1 Server Configuration

Define the server configuration and declare the ZK client bean:

import org.I0Itec.zkclient.ZkClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BeanConfig {

    /**
     * RPC server configuration
     */
    @Autowired
    private RpcServerConfiguration rpcServerConfiguration;

    /**
     * Declare the ZK client bean
     * @return
     */
    @Bean
    public ZkClient zkClient() {
        return new ZkClient(rpcServerConfiguration.getZkAddr(), rpcServerConfiguration.getConnectTimeout());
    }
}

The RPC server configuration properties:


import lombok.Data;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Data
@Component
public class RpcServerConfiguration {

    /**
     * ZK root node name
     */
    @Value("${rpc.server.zk.root}")
    private String zkRoot;

    /**
     * ZK address
     */
    @Value("${rpc.server.zk.addr}")
    private String zkAddr;


    /**
     * RPC (Netty) communication port
     */
    @Value("${rpc.network.port}")
    private int networkPort;

    /**
     * Spring Boot HTTP port
     */
    @Value("${server.port}")
    private int serverPort;

    /**
     * ZK connection timeout
     */
    @Value("${rpc.server.zk.timeout:10000}")
    private int connectTimeout;
}
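
For reference, the configuration keys read by the @Value annotations above might be supplied in application.properties roughly as follows (the values shown are purely illustrative):

# ZooKeeper registry
rpc.server.zk.root=/rpc
rpc.server.zk.addr=127.0.0.1:2181
rpc.server.zk.timeout=10000

# Netty RPC port and Spring Boot HTTP port
rpc.network.port=20880
server.port=8080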

3.2 Server-Side Netty Communication

  1. Server-side Netty request handler

import com.itcast.common.data.RpcRequest;
import com.itcast.common.data.RpcResponse;
import com.itcast.common.utils.SpringBeanFactory;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

/**
 * Server-side handler for incoming RPC request data
 */
@Component
@ChannelHandler.Sharable
public class RpcRequestHandler extends SimpleChannelInboundHandler<RpcRequest> {

    private static final Logger LOGGER = LoggerFactory.getLogger(RpcRequestHandler.class);

    /**
     * Handle received request data
     * @param ctx
     * @param request
     * @throws Exception
     */
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RpcRequest request) throws Exception {
        // Prepare the response wrapper -> RpcResponse
        RpcResponse response = new RpcResponse();

        // 1. Unpack the request: class name, method name, parameter types, and arguments
        String requestId = request.getRequestId();
        String className = request.getClassName();
        String methodName = request.getMethodName();
        Class<?>[] parameterTypes = request.getParameterTypes();
        Object[] parameters = request.getParameters();

        try {
            // 2. Look up the implementation bean and invoke the target method via reflection
            Object targetClass = SpringBeanFactory.getBean(Class.forName(className));
            Method targetMethod = targetClass.getClass().getMethod(methodName, parameterTypes);
            Object result = targetMethod.invoke(targetClass, parameters);

            // 3. Fill the response wrapper with the request id and the result
            response.setRequestId(requestId);
            response.setResult(result);
        } catch (Throwable e) {
            response.setCause(e);
        }
        // 4. Write the response back to the client
        ctx.writeAndFlush(response);
    }
}

Implementation flow:

  1. Create the RPC response object

  2. Unpack the request information

  3. Look up the service implementation bean

  4. Resolve the target method on the implementation

  5. Invoke the method via reflection

  6. Set the result on the response

  7. Record any exception on the response

  8. Write the response back

  2. Netty server initializer configuration


import com.itcast.rpc.server.connector.handler.RpcRequestHandler;
import com.itcast.rpc.server.connector.handler.RpcServerDecodeHandler;
import com.itcast.rpc.server.connector.handler.RpcServerEncodeHandler;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelInitializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

/**
 * Server-side Netty channel initializer
 */
@Component
@ChannelHandler.Sharable
public class RpcServerInitializer extends ChannelInitializer<Channel> {

    @Autowired
    private RpcRequestHandler znsRequestHandler;

    /**
     * Initialize the channel pipeline
     * @param channel
     * @throws Exception
     */
    @Override
    protected void initChannel(Channel channel) throws Exception {
        channel.pipeline()
                .addLast(new RpcServerDecodeHandler())
                .addLast(new RpcServerEncodeHandler())
                .addLast(znsRequestHandler);
    }
}
  3. Server encoder configuration
import com.itcast.common.data.RpcResponse;
import com.itcast.common.utils.ProtoSerializerUtil;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;

/**
 * Server-side encoder
 */
public class RpcServerEncodeHandler extends MessageToByteEncoder<RpcResponse> {

    /**
     * Encode the response
     * @param ctx
     * @param znsResponse
     * @param byteBuf
     * @throws Exception
     */
    @Override
    protected void encode(ChannelHandlerContext ctx, RpcResponse znsResponse, ByteBuf byteBuf)
            throws Exception {
        // Encode the response with the shared Protostuff utility
        byte[] bytes = ProtoSerializerUtil.serialize(znsResponse);
        byteBuf.writeInt(bytes.length);
        byteBuf.writeBytes(bytes);
    }
}

  4. Server decoder configuration
import com.itcast.common.data.RpcRequest;
import com.itcast.common.utils.ProtoSerializerUtil;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

/**
 * Server-side decoder
 */
public class RpcServerDecodeHandler extends ByteToMessageDecoder {

    /**
     * Decode an incoming frame
     * @param ctx
     * @param in
     * @param list
     * @throws Exception
     */
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> list) throws
            Exception {
        // Need at least the 4-byte length prefix
        if (in.readableBytes() <= 4) {
            return;
        }

        // Mark the reader index before consuming the length prefix,
        // so an incomplete frame can be re-read on the next call
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();
        } else {
            byte[] bytes = new byte[length];
            in.readBytes(bytes);
            // Decode the request with the shared Protostuff utility
            RpcRequest znsRequest = ProtoSerializerUtil.deserialize(bytes, RpcRequest.class);
            list.add(znsRequest);
        }
    }
}
  5. Server startup configuration
import com.itcast.common.utils.SpringBeanFactory;
import com.itcast.rpc.server.config.RpcServerConfiguration;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * RPC server connection acceptor
 */
public class RpcServerAcceptor implements Runnable {

    private static final Logger LOGGER = LoggerFactory.getLogger(RpcServerAcceptor.class);

    private EventLoopGroup boss = new NioEventLoopGroup();
    private EventLoopGroup worker = new NioEventLoopGroup();

    private RpcServerConfiguration znsServerConfiguration;
    private RpcServerInitializer znsServerInitializer;

    public RpcServerAcceptor() {
        this.znsServerConfiguration = SpringBeanFactory.getBean(RpcServerConfiguration.class);
        this.znsServerInitializer = SpringBeanFactory.getBean(RpcServerInitializer.class);
    }

    /**
     * Start the Netty server
     */
    @Override
    public void run() {
        // 1. Configure the Netty server bootstrap
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(boss, worker)
                .channel(NioServerSocketChannel.class)
                .handler(new LoggingHandler(LogLevel.DEBUG))
                .option(ChannelOption.SO_BACKLOG, 1024)
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                .childHandler(znsServerInitializer);

        try {
            LOGGER.info("ZnsServer acceptor startup at port[{}] successfully", znsServerConfiguration.getNetworkPort());
            // 2. Bind the port and start the server
            ChannelFuture future = bootstrap.bind(znsServerConfiguration.getNetworkPort()).sync();
            // 3. Block until the server channel is closed
            future.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            LOGGER.error("ZnsServer acceptor startup failure!", e);
            e.printStackTrace();
        } finally {
            boss.shutdownGracefully().syncUninterruptibly();
            worker.shutdownGracefully().syncUninterruptibly();
        }
    }
}

Implementation flow:

  1. Configure the Netty server
  2. Bind the port and start the server
  3. Run the server in blocking (synchronous) mode; a sketch of how this acceptor might be launched at startup follows
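
The article does not show where RpcServerAcceptor is launched. A minimal sketch, assuming a Spring Boot startup hook (the RpcServerRunner class and the thread wiring are assumptions), could be:

import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

/**
 * Hypothetical startup hook that launches the blocking Netty acceptor.
 */
@Component
public class RpcServerRunner implements CommandLineRunner {

    @Override
    public void run(String... args) {
        // The acceptor blocks until the server channel closes, so run it off the main thread
        new Thread(new RpcServerAcceptor(), "rpc-server-acceptor").start();
    }
}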

3.3 Server-Side ZK Registration

  1. Server-side registration implementation
import com.itcast.common.annotation.RpcService;
import com.itcast.common.utils.IpUtil;
import com.itcast.common.utils.SpringBeanFactory;
import com.itcast.rpc.server.config.RpcServerConfiguration;
import org.apache.commons.collections.MapUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import java.util.Map;

/**
 * Manages service registration with ZooKeeper
 */
@Component
public class ServicePushManager {

    private static final Logger LOGGER = LoggerFactory.getLogger(ServicePushManager.class);

    @Autowired
    private ZKit zKit;

    @Autowired
    private RpcServerConfiguration configuration;

    /**
     * Register the local services
     */
    public void registerIntoZK() {
        // 1. Find all beans annotated with @RpcService
        Map<String, Object> beanListByAnnotationClass =
                SpringBeanFactory.getBeanListByAnnotationClass(RpcService.class);

        if(!MapUtils.isEmpty(beanListByAnnotationClass)){
            // 2. Create the root node
            zKit.createRootNode();

            for (Object bean : beanListByAnnotationClass.values()) {
                // 3. Read the @RpcService annotation and its cls attribute from the bean
                RpcService annotation = bean.getClass().getAnnotation(RpcService.class);
                Class<?> clazz = annotation.cls();

                // 4. Use the interface name as the node name under the rpc root node
                String serviceName = clazz.getName();

                // Create the persistent node for this interface
                zKit.createPersistentNode(serviceName);

                // 5. Create a child node per provider in the form IP:HttpPort:RpcPort
                String serviceAddress =
                        IpUtil.getRealIp()+
                        ":"+configuration.getServerPort()+
                        ":"+configuration.getNetworkPort();
                zKit.createNode(serviceName+"/"+serviceAddress);
            }
        }

    }

}

Flow:

  1. Scan for all RPC service implementations.
  2. Register the service information in ZK; the ZKit helper used for this is sketched below.
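
The server-side ZKit helper (createRootNode, createPersistentNode, createNode) is referenced above but not listed in the article. A minimal sketch of what it might look like on top of ZkClient (the exact implementation, and the choice of an ephemeral node for the provider address so that dead providers disappear automatically, are assumptions):

import com.itcast.rpc.server.config.RpcServerConfiguration;
import org.I0Itec.zkclient.ZkClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

/**
 * Hypothetical server-side ZooKeeper helper used by ServicePushManager.
 * Node layout: {zkRoot}/{serviceName}/{ip:httpPort:rpcPort}
 */
@Component
public class ZKit {

    @Autowired
    private ZkClient zkClient;

    @Autowired
    private RpcServerConfiguration configuration;

    /** Create the persistent root node (e.g. /rpc) if it does not exist yet. */
    public void createRootNode() {
        String root = configuration.getZkRoot();
        if (!zkClient.exists(root)) {
            zkClient.createPersistent(root);
        }
    }

    /** Create the persistent node for one RPC interface under the root node. */
    public void createPersistentNode(String serviceName) {
        String path = configuration.getZkRoot() + "/" + serviceName;
        if (!zkClient.exists(path)) {
            zkClient.createPersistent(path);
        }
    }

    /** Create the provider address node (ephemeral, in the form serviceName/ip:httpPort:rpcPort). */
    public void createNode(String relativePath) {
        String path = configuration.getZkRoot() + "/" + relativePath;
        if (!zkClient.exists(path)) {
            zkClient.createEphemeral(path);
        }
    }
}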

4. Flow Recap

(figures)
