A Brief Walkthrough of the gRPC Server Startup Flow
I. RPC and gRPC
1. RPC (remote procedure call): lets applications communicate with each other, following the server/client model. From the caller's side, invoking an interface exposed by the server looks just like calling a local function.
2. gRPC: an RPC framework open-sourced by Google, built on HTTP/2 and Protocol Buffers (protobuf).
The gRPC communication structure is shown in the following figure:
(figure: gRPC communication structure)
Comparing gRPC with RESTful APIs:
①: gRPC defines its interfaces with protobuf, which allows much stricter interface contracts.
②: protobuf payloads are transmitted in binary, improving transfer efficiency.
③: gRPC has convenient support for streaming communication.
II. Server Startup
The project starts the server with the following line:
ServerBuilder.forPort(port).addService(commonService).build().start();
1. forPort(port)
ServerBuilder is an abstract class that each service provider (Provider) extends. ServerProvider's job is to locate the concrete provider; in the grpc project, NettyServerProvider extends ServerProvider. builderForPort() is then called on the provider that was found.
ServerBuilder
----->>
public static ServerBuilder<?> forPort(int port) {
return ServerProvider.provider().builderForPort(port);
}
-----------------------------------------------------------------------------------
NettyServerProvider extends ServerProvider
---------->>
protected NettyServerBuilder builderForPort(int port) {
return NettyServerBuilder.forPort(port);
}
After forPort(port), a NettyServerBuilder object bound to the given port is returned.
Sequence diagram:
provider(): reads the static field ServerProvider.provider. The load logic discovers all ServerProvider implementations, filters out the unavailable ones, sorts the rest in descending priority order, and returns the first element of the list.
The code is as follows:
ServerProvider
----->>
private static final ServerProvider provider = ServiceProviders.load(
ServerProvider.class,
Collections.<Class<?>>emptyList(),
ServerProvider.class.getClassLoader(),
new PriorityAccessor<ServerProvider>() {
@Override
public boolean isAvailable(ServerProvider provider) {
return provider.isAvailable();
}
@Override
public int getPriority(ServerProvider provider) {
return provider.priority();
}
});
-------------------------------------------------------------------
ServiceProviders
----->>
/**
* If this is not Android, returns the highest priority implementation of the class via
* {@link ServiceLoader}.
* If this is Android, returns an instance of the highest priority class in {@code hardcoded}.
*/
public static <T> T load(
Class<T> klass,
Iterable<Class<?>> hardcoded,
ClassLoader cl,
PriorityAccessor<T> priorityAccessor) {
List<T> candidates = loadAll(klass, hardcoded, cl, priorityAccessor);
if (candidates.isEmpty()) {
return null;
}
return candidates.get(0);
}
/**
* If this is not Android, returns all available implementations discovered via
* {@link ServiceLoader}.
* If this is Android, returns all available implementations in {@code hardcoded}.
* The list is sorted in descending priority order.
*/
public static <T> List<T> loadAll(
Class<T> klass,
Iterable<Class<?>> hardcoded,
ClassLoader cl,
final PriorityAccessor<T> priorityAccessor) {
Iterable<T> candidates;
if (isAndroid(cl)) {
candidates = getCandidatesViaHardCoded(klass, hardcoded);
} else {
candidates = getCandidatesViaServiceLoader(klass, cl);
}
List<T> list = new ArrayList<>();
for (T current: candidates) {
if (!priorityAccessor.isAvailable(current)) {
continue;
}
list.add(current);
}
// Sort descending based on priority. If priorities are equal, compare the class names to
// get a reliable result.
Collections.sort(list, Collections.reverseOrder(new Comparator<T>() {
@Override
public int compare(T f1, T f2) {
int pd = priorityAccessor.getPriority(f1) - priorityAccessor.getPriority(f2);
if (pd != 0) {
return pd;
}
return f1.getClass().getName().compareTo(f2.getClass().getName());
}
}));
return Collections.unmodifiableList(list);
}
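How is NettyServerProvider discovered in the first place? getCandidatesViaServiceLoader() relies on Java's standard SPI mechanism: the grpc-netty jar ships a provider-configuration file under META-INF/services listing its ServerProvider implementation. A minimal sketch of the same mechanism, with purely illustrative interface and class names:
// --- GreeterProvider.java: the SPI contract (illustrative) ---
public interface GreeterProvider {
  String greet(String name);
}
// --- ConsoleGreeterProvider.java: one implementation ---
public class ConsoleGreeterProvider implements GreeterProvider {
  @Override public String greet(String name) { return "hello, " + name; }
}
// --- META-INF/services/GreeterProvider: one fully-qualified impl name per line ---
// ConsoleGreeterProvider
// --- SpiDemo.java ---
import java.util.ServiceLoader;
public class SpiDemo {
  public static void main(String[] args) {
    // ServiceLoader reads the META-INF/services entry and instantiates each
    // listed class reflectively, just as loadAll() above does for ServerProvider.
    for (GreeterProvider p : ServiceLoader.load(GreeterProvider.class)) {
      System.out.println(p.greet("grpc"));
    }
  }
}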
2. addService(commonService)
Sequence diagram:
a. forPort(port) returned a NettyServerBuilder, which extends AbstractServerImplBuilder. The addService method is implemented in the abstract class AbstractServerImplBuilder; it calls bindService() on the BindableService. The ServiceImplBase generated from the service in the .proto file implements BindableService.
b. The abstract class ServiceImplBase declares the user-defined service methods together with bindService().
AbstractServerImplBuilder
----->>
@Override
public final T addService(BindableService bindableService) {
if (bindableService instanceof InternalNotifyOnServerBuild) {
notifyOnBuildList.add((InternalNotifyOnServerBuild) bindableService);
}
return addService(checkNotNull(bindableService, "bindableService").bindService());
}
-----------------------------------------------------------------
public static abstract class CommonServiceImplBase implements CommonService, io.grpc.BindableService {
  @java.lang.Override public io.grpc.ServerServiceDefinition bindService() {
    return CommonServiceGrpc.bindService(this);
  }
  // ... generated service methods elided
}
------------------------------------------------------------------------
CommonServiceGrpc
----->>
@java.lang.Deprecated public static io.grpc.ServerServiceDefinition bindService(
final CommonService serviceImpl) {
return io.grpc.ServerServiceDefinition.builder(getServiceDescriptor())
.addMethod(
METHOD_HANDLE,
asyncUnaryCall(
new MethodHandlers<
com.why.rpc.GrpcService.Request,
com.why.rpc.GrpcService.Response>(
serviceImpl, METHODID_HANDLE)))
.build();
}
c. bindService(CommonService) inside the generated ServiceGrpc class
getServiceDescriptor(): returns the descriptor for the defined methods; it wraps the defined service and methods along with the names and marshallers of their parameter types, and returns a ServiceDescriptor.
MethodHandlers, an inner class of the generated ServiceGrpc, carries the request and response types of a method, the service implementation to invoke, and the corresponding methodId.
The MethodHandlers object is then passed to the matching ServerCalls factory method (asyncUnaryCall here), which returns a ServerCallHandler implementation; this adapter is what accommodates the different kinds of methods a service can define.
After the handlers have been registered and adapted, ServerServiceDefinition.Builder.build() is called. When no ServiceDescriptor was supplied, it constructs one from the serviceName and the collected methodDescriptors, then verifies that the descriptor entries and the bound methods match exactly.
private static class MethodHandlers<Req, Resp> implements
io.grpc.stub.ServerCalls.UnaryMethod<Req, Resp>,
io.grpc.stub.ServerCalls.ServerStreamingMethod<Req, Resp>,
io.grpc.stub.ServerCalls.ClientStreamingMethod<Req, Resp>,
io.grpc.stub.ServerCalls.BidiStreamingMethod<Req, Resp> {
private final CommonService serviceImpl;
private final int methodId;
public MethodHandlers(CommonService serviceImpl, int methodId) {
this.serviceImpl = serviceImpl;
this.methodId = methodId;
}
@java.lang.Override
@java.lang.SuppressWarnings("unchecked")
public void invoke(Req request, io.grpc.stub.StreamObserver<Resp> responseObserver) {
switch (methodId) {
case METHODID_HANDLE:
serviceImpl.handle((com.why.rpc.GrpcService.Request) request,
(io.grpc.stub.StreamObserver<com.why.rpc.GrpcService.Response>) responseObserver);
break;
default:
throw new AssertionError();
}
}
@java.lang.Override
@java.lang.SuppressWarnings("unchecked")
public io.grpc.stub.StreamObserver<Req> invoke(
io.grpc.stub.StreamObserver<Resp> responseObserver) {
switch (methodId) {
default:
throw new AssertionError();
}
}
}
public static io.grpc.ServiceDescriptor getServiceDescriptor() {
return new io.grpc.ServiceDescriptor(SERVICE_NAME,
METHOD_HANDLE);
}
The build() method of ServerServiceDefinition.Builder is shown below:
/**
* Construct new ServerServiceDefinition.
*/
public ServerServiceDefinition build() {
ServiceDescriptor serviceDescriptor = this.serviceDescriptor;
if (serviceDescriptor == null) {
List<MethodDescriptor<?, ?>> methodDescriptors
= new ArrayList<>(methods.size());
for (ServerMethodDefinition<?, ?> serverMethod : methods.values()) {
methodDescriptors.add(serverMethod.getMethodDescriptor());
}
serviceDescriptor = new ServiceDescriptor(serviceName, methodDescriptors);
}
Map<String, ServerMethodDefinition<?, ?>> tmpMethods = new HashMap<>(methods);
for (MethodDescriptor<?, ?> descriptorMethod : serviceDescriptor.getMethods()) {
ServerMethodDefinition<?, ?> removed = tmpMethods.remove(
descriptorMethod.getFullMethodName());
if (removed == null) {
throw new IllegalStateException(
"No method bound for descriptor entry " + descriptorMethod.getFullMethodName());
}
if (removed.getMethodDescriptor() != descriptorMethod) {
throw new IllegalStateException(
"Bound method for " + descriptorMethod.getFullMethodName()
+ " not same instance as method in service descriptor");
}
}
if (tmpMethods.size() > 0) {
throw new IllegalStateException(
"No entry in descriptor matching bound method "
+ tmpMethods.values().iterator().next().getMethodDescriptor().getFullMethodName());
}
return new ServerServiceDefinition(serviceDescriptor, methods);
}
}
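To tie the pieces together, here is a minimal sketch of a handler implementation that this binding flow would register; CommonServiceImpl is a hypothetical name and the empty response is illustrative, while the handle signature comes from the generated code above. Passing an instance of it to addService(...) triggers exactly the bindService() path just described.
public class CommonServiceImpl extends CommonServiceGrpc.CommonServiceImplBase {
  @java.lang.Override
  public void handle(com.why.rpc.GrpcService.Request request,
      io.grpc.stub.StreamObserver<com.why.rpc.GrpcService.Response> responseObserver) {
    com.why.rpc.GrpcService.Response response =
        com.why.rpc.GrpcService.Response.newBuilder().build();  // fill in fields as needed
    responseObserver.onNext(response);   // a unary call sends exactly one response
    responseObserver.onCompleted();      // completes the RPC
  }
}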
3. build()
With the listening port and the service definitions configured in steps 1 and 2, the server itself is now constructed.
Server creation happens in AbstractServerImplBuilder; its pieces are walked through below.
a. getTracerFactories(): assembles the stream-tracing factories. By default two modules are created, CensusStatsModule and CensusTracingModule.
CensusStatsModule: provides a factory that records Census stats.
CensusTracingModule: provides a factory that records Census traces.
These factories are eventually used in NettyServerHandler's onHeadersRead method, where a StatsTraceContext is created from the method and headers of the incoming call and used to construct the NettyServerStream object. (A sketch of registering a custom factory follows the listing below.)
@VisibleForTesting
final List<? extends ServerStreamTracer.Factory> getTracerFactories() {
ArrayList<ServerStreamTracer.Factory> tracerFactories = new ArrayList<>();
if (statsEnabled) {
CensusStatsModule censusStats = censusStatsOverride;
if (censusStats == null) {
censusStats = new CensusStatsModule(
GrpcUtil.STOPWATCH_SUPPLIER, true, recordStartedRpcs, recordFinishedRpcs,
recordRealTimeMetrics);
}
tracerFactories.add(censusStats.getServerTracerFactory());
}
if (tracingEnabled) {
CensusTracingModule censusTracing =
new CensusTracingModule(Tracing.getTracer(),
Tracing.getPropagationComponent().getBinaryFormat());
tracerFactories.add(censusTracing.getServerTracerFactory());
}
tracerFactories.addAll(streamTracerFactories);
tracerFactories.trimToSize();
return Collections.unmodifiableList(tracerFactories);
}
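User-registered factories (streamTracerFactories) are appended after the Census ones. A hedged sketch of such a custom factory; the logging is purely illustrative, and registration goes through ServerBuilder.addStreamTracerFactory():
import io.grpc.Metadata;
import io.grpc.ServerStreamTracer;

class LoggingTracerFactory extends ServerStreamTracer.Factory {
  @java.lang.Override
  public ServerStreamTracer newServerStreamTracer(String fullMethodName, Metadata headers) {
    // Called once per incoming stream, with the method name and request headers.
    System.out.println("stream created for " + fullMethodName);
    return new ServerStreamTracer() {};  // no-op tracer; override StreamTracer hooks as needed
  }
}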
b. buildTransportServers(): an abstract method that each transport subclass overrides to build its transport-specific servers. NettyServerBuilder creates a NettyServer per configured SocketAddress.
NettyServerBuilder's buildTransportServers may end up with one of two kinds of EventLoopGroup: when epoll is usable, an io.netty.channel.epoll.EpollServerSocketChannel and matching epoll event loop groups are created; when the io.netty.channel.epoll.Epoll class is not found, it falls back to NIO channels (a sketch of this choice follows the listing below).
One NettyServer is created per configured listen address, and the list of NettyServers is returned.
NettyServerBuilder
------------->>>
@Override
@CheckReturnValue
protected List<NettyServer> buildTransportServers(
List<? extends ServerStreamTracer.Factory> streamTracerFactories) {
assertEventLoopsAndChannelType();
ProtocolNegotiator negotiator = protocolNegotiator;
if (negotiator == null) {
negotiator = sslContext != null
? ProtocolNegotiators.serverTls(sslContext, this.getExecutorPool())
: ProtocolNegotiators.serverPlaintext();
}
List<NettyServer> transportServers = new ArrayList<>(listenAddresses.size());
for (SocketAddress listenAddress : listenAddresses) {
NettyServer transportServer = new NettyServer(
listenAddress, channelFactory, channelOptions, childChannelOptions,
bossEventLoopGroupPool, workerEventLoopGroupPool, forceHeapBuffer, negotiator,
streamTracerFactories, getTransportTracerFactory(), maxConcurrentCallsPerConnection,
autoFlowControl, flowControlWindow, maxMessageSize, maxHeaderListSize,
keepAliveTimeInNanos, keepAliveTimeoutInNanos,
maxConnectionIdleInNanos, maxConnectionAgeInNanos,
maxConnectionAgeGraceInNanos, permitKeepAliveWithoutCalls, permitKeepAliveTimeInNanos,
getChannelz());
transportServers.add(transportServer);
}
return Collections.unmodifiableList(transportServers);
}
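The epoll-versus-NIO choice mentioned above boils down to the following; grpc-netty makes the decision reflectively so the epoll classes remain optional, but a direct sketch looks like this:
import io.netty.channel.EventLoopGroup;
import io.netty.channel.ServerChannel;
import io.netty.channel.epoll.Epoll;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.nio.NioServerSocketChannel;

final class TransportChoice {
  // Prefer the Linux-only epoll transport when its native library loads.
  static Class<? extends ServerChannel> channelType() {
    return Epoll.isAvailable()
        ? EpollServerSocketChannel.class : NioServerSocketChannel.class;
  }
  static EventLoopGroup newGroup(int threads) {
    return Epoll.isAvailable()
        ? new EpollEventLoopGroup(threads) : new NioEventLoopGroup(threads);
  }
}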
c. Instantiation of ServerImpl, which mainly sets fields on the new instance.
notifyTarget.notifyOnBuild(server): each service that implements this interface is notified that the server has been built.
@Override
public final Server build() {
ServerImpl server = new ServerImpl(
this,
buildTransportServers(getTracerFactories()),
Context.ROOT);
for (InternalNotifyOnServerBuild notifyTarget : notifyOnBuildList) {
notifyTarget.notifyOnBuild(server);
}
return server;
}
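Since the builder returned by forPort(port) is in fact a NettyServerBuilder, transport-level knobs can be tuned before build(); the values below are purely illustrative, not recommendations:
import java.util.concurrent.TimeUnit;
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

Server server = NettyServerBuilder.forPort(port)
    .addService(commonService)
    .maxConcurrentCallsPerConnection(100)  // cap on concurrent streams per connection
    .flowControlWindow(1024 * 1024)        // HTTP/2 flow-control window, in bytes
    .keepAliveTime(30, TimeUnit.SECONDS)   // server-initiated keepalive interval
    .build();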
4. start()
With the parameters prepared and the objects created in the previous steps, the startup sequence begins.
ServerImpl's transportServers field holds the list of NettyServers created by NettyServerBuilder during build(); each of these servers is started here.
ServerImpl: start() validates the server's state and creates a ServerListenerImpl object; the method is as follows:
/**
* Bind and start the server.
*
* @return {@code this} object
* @throws IllegalStateException if already started
* @throws IOException if unable to bind
*/
@Override
public ServerImpl start() throws IOException {
synchronized (lock) {
checkState(!started, "Already started");
checkState(!shutdown, "Shutting down");
// Start and wait for any ports to actually be bound.
ServerListenerImpl listener = new ServerListenerImpl();
for (InternalServer ts : transportServers) {
ts.start(listener);
activeTransportServers++;
}
executor = Preconditions.checkNotNull(executorPool.getObject(), "executor");
started = true;
return this;
}
}
NettyServer: start(ServerListener listener) creates Netty's ServerBootstrap and sets its options. The two key objects created here are NettyServerTransport and ServerTransportListener.
NettyServerTransport: responsible for message transport with Netty; it owns the Netty channel. Its start() method creates a GrpcHttp2ConnectionHandler and a WriteBufferingAndExceptionHandler and adds them to the channel's pipeline, so the channel's messages are encoded and decoded there.
ServerTransportListener: the server-side observer; stream creation is notified on this transport's thread.
The code is as follows:
@Override
public void start(ServerListener serverListener) throws IOException {
listener = checkNotNull(serverListener, "serverListener");
ServerBootstrap b = new ServerBootstrap();
b.option(ALLOCATOR, Utils.getByteBufAllocator(forceHeapBuffer));
b.childOption(ALLOCATOR, Utils.getByteBufAllocator(forceHeapBuffer));
b.group(bossGroup, workerGroup);
b.channelFactory(channelFactory);
// For non-socket based channel, the option will be ignored.
b.childOption(SO_KEEPALIVE, true);
if (channelOptions != null) {
for (Map.Entry<ChannelOption<?>, ?> entry : channelOptions.entrySet()) {
@SuppressWarnings("unchecked")
ChannelOption<Object> key = (ChannelOption<Object>) entry.getKey();
b.option(key, entry.getValue());
}
}
if (childChannelOptions != null) {
for (Map.Entry<ChannelOption<?>, ?> entry : childChannelOptions.entrySet()) {
@SuppressWarnings("unchecked")
ChannelOption<Object> key = (ChannelOption<Object>) entry.getKey();
b.childOption(key, entry.getValue());
}
}
b.childHandler(new ChannelInitializer<Channel>() {
@Override
public void initChannel(Channel ch) {
ChannelPromise channelDone = ch.newPromise();
long maxConnectionAgeInNanos = NettyServer.this.maxConnectionAgeInNanos;
if (maxConnectionAgeInNanos != MAX_CONNECTION_AGE_NANOS_DISABLED) {
// apply a random jitter of +/-10% to max connection age
maxConnectionAgeInNanos =
(long) ((.9D + Math.random() * .2D) * maxConnectionAgeInNanos);
}
NettyServerTransport transport =
new NettyServerTransport(
ch,
channelDone,
protocolNegotiator,
streamTracerFactories,
transportTracerFactory.create(),
maxStreamsPerConnection,
autoFlowControl,
flowControlWindow,
maxMessageSize,
maxHeaderListSize,
keepAliveTimeInNanos,
keepAliveTimeoutInNanos,
maxConnectionIdleInNanos,
maxConnectionAgeInNanos,
maxConnectionAgeGraceInNanos,
permitKeepAliveWithoutCalls,
permitKeepAliveTimeInNanos);
ServerTransportListener transportListener;
// This is to order callbacks on the listener, not to guard access to channel.
synchronized (NettyServer.this) {
if (channel != null && !channel.isOpen()) {
// Server already shutdown.
ch.close();
return;
}
// `channel` shutdown can race with `ch` initialization, so this is only safe to increment
// inside the lock.
sharedResourceReferenceCounter.retain();
transportListener = listener.transportCreated(transport);
}
/**
* Releases the event loop if the channel is "done", possibly due to the channel closing.
*/
final class LoopReleaser implements ChannelFutureListener {
private boolean done;
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!done) {
done = true;
sharedResourceReferenceCounter.release();
}
}
}
transport.start(transportListener);
ChannelFutureListener loopReleaser = new LoopReleaser();
channelDone.addListener(loopReleaser);
ch.closeFuture().addListener(loopReleaser);
}
});
// Bind and start to accept incoming connections.
ChannelFuture future = b.bind(address);
// We'd love to observe interruption, but if interrupted we will need to close the channel,
// which itself would need an await() to guarantee the port is not used when the method returns.
// See #6850
future.awaitUninterruptibly();
if (!future.isSuccess()) {
throw new IOException("Failed to bind", future.cause());
}
channel = future.channel();
channel.eventLoop().execute(new Runnable() {
@Override
public void run() {
listenSocketStats = new ListenSocket(channel);
channelz.addListenSocket(listenSocketStats);
}
});
}
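Putting the four steps together, a complete startup skeleton based on the flow above might look like this (CommonServiceImpl is the hypothetical implementation sketched in step 2, and the graceful-shutdown handling is an addition on top of the article's one-liner):
import io.grpc.Server;
import io.grpc.ServerBuilder;

public class GrpcServerBootstrap {
  public static void main(String[] args) throws Exception {
    Server server = ServerBuilder.forPort(50051)  // 1. locate the provider, bind the port
        .addService(new CommonServiceImpl())      // 2. bind the ServerServiceDefinition
        .build()                                  // 3. build the NettyServer(s) and ServerImpl
        .start();                                 // 4. bootstrap Netty and start accepting connections
    // Let in-flight calls finish when the JVM is asked to exit.
    Runtime.getRuntime().addShutdownHook(new Thread(server::shutdown));
    server.awaitTermination();                    // block the main thread until shutdown
  }
}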