A network service needs both a client and a server. We will first use the Netty framework to create the server side, listening on local port 9000.
private final int port = 9000;
private ChannelFuture channelFuture;
private NioEventLoopGroup group;

/**
 * Start the server.
 * @throws InterruptedException
 */
public void start() throws InterruptedException {
    // Event loop group
    group = new NioEventLoopGroup();
    // Server bootstrap
    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(group)
            // Use a non-blocking NIO server socket channel
            .channel(NioServerSocketChannel.class)
            // Address to bind to
            .localAddress(new InetSocketAddress(port))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new ServerInChannelAdapter());
                }
            });
    // Bind the port and obtain the Channel
    channelFuture = bootstrap.bind().sync();
}

/**
 * Stop the server.
 * @throws InterruptedException
 */
public void stop() throws InterruptedException {
    try {
        channelFuture.channel().close().sync();
    } finally {
        // Shut down the event loop group
        group.shutdownGracefully().sync();
    }
}
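For reference, the same bind/close lifecycle as a self-contained sketch. This is an illustration, not the article's exact server: it binds to port 0 (any free port) so it cannot clash with a server already occupying 9000, and the pipeline is left empty.

```java
import java.net.InetSocketAddress;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class ServerLifecycleSketch {
    public static void main(String[] args) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(group)
                    .channel(NioServerSocketChannel.class)
                    // Port 0 asks the OS for any free port
                    .localAddress(new InetSocketAddress(0))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Pipeline handlers would be added here
                        }
                    });
            // bind().sync() blocks until the bind completes
            Channel channel = bootstrap.bind().sync().channel();
            System.out.println("bound: " + channel.localAddress());
            // Unbind and release the channel
            channel.close().sync();
        } finally {
            // Release the event-loop threads
            group.shutdownGracefully().sync();
        }
    }
}
```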
When a client connects to the server, every handler in the channel's pipeline is invoked, so we also need to write a handler that processes client requests.
@Slf4j
public class ServerInChannelAdapter extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        log.debug("New client connected -> {}", ctx.channel().id());
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in = (ByteBuf) msg;
        log.debug("Server received: {}", in.toString(CharsetUtil.UTF_8));
        // The buffer must be released manually once reading is done
        ReferenceCountUtil.release(in);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        log.debug("Server finished reading");
        // Send a reply
        ctx.writeAndFlush(Unpooled.copiedBuffer("This is the server, please reply if received", CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("Server exception", cause);
        // Close the connection
        ctx.close();
    }
}
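Note that `ChannelInboundHandlerAdapter` makes us release the inbound `ByteBuf` ourselves, while `SimpleChannelInboundHandler` (which the client below uses) releases it automatically after `channelRead0` returns. A minimal sketch, using Netty's `EmbeddedChannel` test harness to exercise such a handler without any real network; the `EchoHandler` here is a simplified stand-in for the handlers in this article, not the article's exact code:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.CharsetUtil;

public class EmbeddedHandlerDemo {
    // SimpleChannelInboundHandler releases the inbound message automatically,
    // so no manual ReferenceCountUtil.release(...) is needed here.
    static class EchoHandler extends SimpleChannelInboundHandler<ByteBuf> {
        @Override
        protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
            String text = msg.toString(CharsetUtil.UTF_8);
            // Reply with the received text, prefixed
            ctx.writeAndFlush(Unpooled.copiedBuffer("echo:" + text, CharsetUtil.UTF_8));
        }
    }

    public static void main(String[] args) {
        // EmbeddedChannel runs the pipeline in-process, no sockets involved
        EmbeddedChannel channel = new EmbeddedChannel(new EchoHandler());
        channel.writeInbound(Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));
        ByteBuf reply = channel.readOutbound();
        System.out.println(reply.toString(CharsetUtil.UTF_8)); // prints "echo:hello"
        reply.release();
        channel.finish();
    }
}
```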
The server-side data handling is done. Next, we write the client.
First, create the Netty bootstrap that starts the client connection.
public void start() throws InterruptedException {
    NioEventLoopGroup group = new NioEventLoopGroup();
    Bootstrap bootstrap = new Bootstrap();
    bootstrap.group(group)
            .channel(NioSocketChannel.class)
            .remoteAddress(new InetSocketAddress("127.0.0.1", 9000))
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new ClientTcpInChannel());
                }
            });
    // Wait until the connection attempt completes (throws if it fails)
    bootstrap.connect().sync();
}
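The `start()` above returns as soon as the connection is made and never releases its `NioEventLoopGroup`. A fuller client lifecycle would wait for the connection, block until the channel closes, and then shut the group down. A sketch, with the pipeline left as a placeholder (the article's `ClientTcpInChannel` would be added there):

```java
import java.net.InetSocketAddress;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class ClientLifecycleSketch {
    public static void run(String host, int port) throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group)
                    .channel(NioSocketChannel.class)
                    .remoteAddress(new InetSocketAddress(host, port))
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // add ClientTcpInChannel (or other handlers) here
                        }
                    });
            // sync() waits for the connect attempt; it throws if it fails
            Channel channel = bootstrap.connect().sync().channel();
            // Block until the channel is closed by either side
            channel.closeFuture().sync();
        } finally {
            // Always release the event-loop threads
            group.shutdownGracefully().sync();
        }
    }
}
```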
The client likewise needs a handler to process data on the connection.
@Slf4j
public class ClientTcpInChannel extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        log.debug("Client received -> {}", msg.toString(CharsetUtil.UTF_8));
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        log.debug("Connected to server");
        ctx.writeAndFlush(Unpooled.copiedBuffer("This is the client, please reply if received", CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        log.error("Client exception", cause);
        ctx.close();
    }
}
Now run the server code first; once it has started successfully, run the client code.
Server log:
09:27:58.545 [nioEventLoopGroup-2-2] DEBUG com.github.xuejike.javap2p.server.channel.ServerInChannelAdapter - New client connected -> 038c250d
09:27:58.557 [nioEventLoopGroup-2-2] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
09:27:58.557 [nioEventLoopGroup-2-2] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
09:27:58.557 [nioEventLoopGroup-2-2] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
09:27:58.557 [nioEventLoopGroup-2-2] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
09:27:58.566 [nioEventLoopGroup-2-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
09:27:58.567 [nioEventLoopGroup-2-2] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
09:27:58.568 [nioEventLoopGroup-2-2] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@8d12c2d
09:27:58.573 [nioEventLoopGroup-2-2] DEBUG com.github.xuejike.javap2p.server.channel.ServerInChannelAdapter - Server received: This is the client, please reply if received
09:27:58.573 [nioEventLoopGroup-2-2] DEBUG com.github.xuejike.javap2p.server.channel.ServerInChannelAdapter - Server finished reading
Client log:
09:27:58.493 [nioEventLoopGroup-2-1] DEBUG com.github.xuejike.javap2p.client.channel.ClientTcpInChannel - Connected to server
09:27:58.508 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkAccessible: true
09:27:58.508 [nioEventLoopGroup-2-1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.checkBounds: true
09:27:58.509 [nioEventLoopGroup-2-1] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@e8aa1e5
09:27:58.514 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacityPerThread: 4096
09:27:58.515 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxSharedCapacityFactor: 2
09:27:58.515 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.linkCapacity: 16
09:27:58.515 [nioEventLoopGroup-2-1] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.ratio: 8
09:27:58.592 [nioEventLoopGroup-2-1] DEBUG com.github.xuejike.javap2p.client.channel.ClientTcpInChannel - Client received -> This is the server, please reply if received