1. Introduction
RDMA (Remote Direct Memory Access) is a technique in which two or more communicating hosts use DMA to access each other's memory directly: one host reads or writes another host's memory without going through the remote CPU.
DISNI is IBM's open-source RDMA library and the successor to jVerbs. It provides both a low-level verbs API and a higher-level Endpoint API. For remote memory reads and writes, the remote virtual memory address that the operation targets is carried inside the RDMA message itself; all the remote application has to do is register the corresponding memory buffer with its local NIC. Apart from connection setup and registration calls, the remote node's CPU takes no part in the RDMA data transfer, so the transfer adds no load on it.
The system architecture diagram below is borrowed from a Zhihu post:
2. Class Overview
- RdmaCmId: the unique identifier of an RDMA endpoint, analogous to a Socket.
- IbvMr: a registered memory region, i.e., a buffer that RDMA read/write operations target. It consists of:
  - addr: the address of the memory region
  - length: the length of the buffer
  - lkey: the local key
  - rkey: the remote key
- IbvSge: a scatter/gather element describing a local RDMA memory region, i.e., an IbvMr; it carries the IbvMr's address, its length, and its lkey (which I understand as a unique identifier for the region).
- IbvSendWR: a send work request. It holds a list of IbvSge, each pointing to an IbvMr, i.e., a buffer whose data is to be sent.
- IbvRecvWR: a receive work request. It likewise contains an IbvSge describing the memory buffer in which incoming data should be placed.
- IbvWC: a work completion event. The application can poll the CQ to collect all completed work requests (both send and recv).
- RdmaEndpoint: represents a client node. It provides connect(), disconnect(), and similar methods; once connected it behaves much like an RdmaCmId and offers postSend(), postRecv(), registerMemory(), and so on. postSend() takes a List<IbvSendWR> describing the send operations to execute; postRecv() works analogously; registerMemory() takes a ByteBuffer and registers it with the RDMA device.
- RdmaServerEndpoint: represents a server node, providing bind(), accept(), and similar methods.
- RdmaEndpointGroup: the factory for, and container of, RdmaEndpoint and RdmaServerEndpoint instances.
- IbvCQ: a completion queue; several IbvQp instances may share one completion queue.
- IbvQp: a Queue Pair, consisting of a send queue (SQ) and a receive queue (RQ).
- IbvPd: a protection domain, used when registering memory.
3. Basic Flow
- The application posts work requests (WRs) to a work queue (WQ, subdivided into SQ and RQ); WRs are the elements stored in the work queue.
- The RDMA device continuously takes requests off the work queue and executes them.
- Whenever a WR completes, a corresponding WC is generated and placed into the CQ.
- The application keeps draining WCs from the completion queue (CQ).
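The four steps above can be modeled with plain Java queues, with no RDMA hardware involved. In this sketch the two BlockingQueues are hypothetical stand-ins for the WQ and CQ, and a background thread plays the role of the RDMA device:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class WorkQueueModel {
    // Post one WR (identified by its id) and block until its WC comes back.
    static long postAndAwait(long wrId) {
        BlockingQueue<Long> workQueue = new ArrayBlockingQueue<>(128);       // WQ: holds posted WR ids
        BlockingQueue<Long> completionQueue = new ArrayBlockingQueue<>(128); // CQ: holds WC ids
        Thread device = new Thread(() -> {       // stands in for the RDMA device
            try {
                long wr = workQueue.take();      // pull a WR off the work queue
                completionQueue.put(wr);         // "execute" it and publish its WC to the CQ
            } catch (InterruptedException ignored) {}
        });
        device.start();
        try {
            workQueue.put(wrId);                 // the application posts a WR
            long wc = completionQueue.take();    // ...and blocks on the CQ for its WC
            device.join();
            return wc;
        } catch (InterruptedException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("completed wr_id " + postAndAwait(1001));
    }
}
```

This mirrors how the DISNI examples below use `dispatchCqEvent()` to push each IbvWC into an ArrayBlockingQueue and `getWcEvents().take()` to wait for it.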
4. Examples
The examples are taken from the DISNI library's examples package.
Example 1: ReadClient & ReadServer
ReadClient
public class ReadClient implements RdmaEndpointFactory<ReadClient.CustomClientEndpoint> {
//a client factory used to create CustomClientEndpoint instances,
//roughly the counterpart of Netty's NioEventLoopGroup
private RdmaActiveEndpointGroup<ReadClient.CustomClientEndpoint> endpointGroup;
//the server's host
private String host;
//the server's RDMA listening port
private int port;
//entry point; the server's ip and port are passed in via args
public static void main(String[] args) throws Exception {
ReadClient simpleClient = new ReadClient();
simpleClient.launch(args);
}
//parse args
public void launch(String[] args) throws Exception {
CmdLineCommon cmdLine = new CmdLineCommon("ReadClient");
try {
cmdLine.parse(args);
} catch (ParseException e) {
cmdLine.printHelp();
System.exit(-1);
}
host = cmdLine.getIp();
port = cmdLine.getPort();
//call run() to start the client
this.run();
}
//run the RDMA client
public void run() throws Exception {
//create the EndpointGroup. RdmaActiveEndpointGroup contains the CQ processing and dispatches CQ events via the endpoint's dispatchCqEvent() method
//the group handles all work requests and raises a completion event when one finishes; conceptually similar to Netty's NioEventLoopGroup
endpointGroup = new RdmaActiveEndpointGroup<ReadClient.CustomClientEndpoint>(1000, false, 128, 4, 128);
//pass in an EndpointFactory used to create endpoints, similar to Netty's ChannelFactory
//ReadClient itself implements RdmaEndpointFactory, so we pass this
endpointGroup.init(this);
//create an endpoint
ReadClient.CustomClientEndpoint endpoint = endpointGroup.createEndpoint();
//connect to the server
InetAddress ipAddress = InetAddress.getByName(host);
InetSocketAddress address = new InetSocketAddress(ipAddress, port);
//1000 is the connect timeout in milliseconds
endpoint.connect(address, 1000);
InetSocketAddress _addr = (InetSocketAddress) endpoint.getDstAddr();
System.out.println("ReadClient::client connected, address " + _addr.toString());
//in our custom endpoints we make sure CQ events get stored in a queue, we now query that queue for new CQ events.
//in this case a new CQ event means we have received some data, i.e., a message from the server
//after the connection is established, the server sends over the details of its data buffer via a send operation
endpoint.getWcEvents().take();
ByteBuffer recvBuf = endpoint.getRecvBuf();
//the message has been received in this buffer
//it contains some RDMA information sent by the server
//clear() resets the position to 0 so the buffer is read from the beginning
recvBuf.clear();
long addr = recvBuf.getLong();
int length = recvBuf.getInt();
int lkey = recvBuf.getInt(); //the server-side key; the client uses it as the rkey for the RDMA read
recvBuf.clear();
System.out.println("ReadClient::receiving rdma information, addr " + addr + ", length " + length + ", key " + lkey);
System.out.println("ReadClient::preparing read operation...");
//the RDMA information above identifies an RDMA buffer at the server side
//let's issue a one-sided RDMA read operation to fetch the content from that buffer
IbvSendWR sendWR = endpoint.getSendWR();
sendWR.setWr_id(1001);
sendWR.setOpcode(IbvSendWR.IBV_WR_RDMA_READ);
sendWR.setSend_flags(IbvSendWR.IBV_SEND_SIGNALED);
sendWR.getRdma().setRemote_addr(addr);
sendWR.getRdma().setRkey(lkey);
//post the operation on the endpoint
//one-sided read: the opcode set above makes this an RDMA READ request
SVCPostSend postSend = endpoint.postSend(endpoint.getWrList_send());
for (int i = 10; i <= 100; ){
//read i bytes of the remote buffer described by the first SGE of the first WR
postSend.getWrMod(0).getSgeMod(0).setLength(i);
postSend.execute();
//wait until the operation has completed
endpoint.getWcEvents().take();
//we should have the content of the remote buffer in our own local buffer now
ByteBuffer dataBuf = endpoint.getDataBuf();
dataBuf.clear();
System.out.println("ReadClient::read memory from server: " + dataBuf.asCharBuffer().toString());
i += 10;
}
//let's prepare a final message to signal everything went fine
sendWR.setWr_id(1002);
sendWR.setOpcode(IbvSendWR.IBV_WR_SEND);
sendWR.setSend_flags(IbvSendWR.IBV_SEND_SIGNALED);
sendWR.getRdma().setRemote_addr(addr);
sendWR.getRdma().setRkey(lkey);
//post that operation
endpoint.postSend(endpoint.getWrList_send()).execute().free();
//close everything
System.out.println("closing endpoint");
endpoint.close();
System.out.println("closing endpoint, done");
endpointGroup.close();
}
//implementation of the RdmaEndpointFactory interface method
public ReadClient.CustomClientEndpoint createEndpoint(RdmaCmId idPriv, boolean serverSide) throws IOException {
return new ReadClient.CustomClientEndpoint(endpointGroup, idPriv, serverSide);
}
//the client-side endpoint, analogous to a client socket
public static class CustomClientEndpoint extends RdmaActiveEndpoint {
//each buffer maps 1:1 to an IbvMr
private ByteBuffer buffers[];
//the memory regions the RDMA device operates on
private IbvMr mrlist[];
//three buffers in total
private int buffercount = 3;
//each 100 bytes in size
private int buffersize = 100;
private ByteBuffer dataBuf;
private IbvMr dataMr;
private ByteBuffer sendBuf;
private ByteBuffer recvBuf;
private IbvMr recvMr;
//list of send work requests to post
private LinkedList<IbvSendWR> wrList_send;
private IbvSge sgeSend;
private LinkedList<IbvSge> sgeList;
private IbvSendWR sendWR;
private LinkedList<IbvRecvWR> wrList_recv;
private IbvSge sgeRecv;
private LinkedList<IbvSge> sgeListRecv;
private IbvRecvWR recvWR;
private ArrayBlockingQueue<IbvWC> wcEvents;
public CustomClientEndpoint(RdmaActiveEndpointGroup<? extends CustomClientEndpoint> endpointGroup, RdmaCmId idPriv, boolean isServerSide) throws IOException {
super(endpointGroup, idPriv, isServerSide);
this.buffercount = 3;
this.buffersize = 100;
buffers = new ByteBuffer[buffercount];
this.mrlist = new IbvMr[buffercount];
for (int i = 0; i < buffercount; i++){
buffers[i] = ByteBuffer.allocateDirect(buffersize);
}
this.wrList_send = new LinkedList<IbvSendWR>();
this.sgeSend = new IbvSge();
this.sgeList = new LinkedList<IbvSge>();
this.sendWR = new IbvSendWR();
this.wrList_recv = new LinkedList<IbvRecvWR>();
this.sgeRecv = new IbvSge();
this.sgeListRecv = new LinkedList<IbvSge>();
this.recvWR = new IbvRecvWR();
this.wcEvents = new ArrayBlockingQueue<IbvWC>(10);
}
//important: we override the init method to prepare some buffers (memory registration, post recv, etc).
//This guarantees that at least one recv operation will be posted at the moment this endpoint is connected.
public void init() throws IOException{
super.init();
for (int i = 0; i < buffercount; i++){
mrlist[i] = registerMemory(buffers[i]).execute().free().getMr();
}
this.dataBuf = buffers[0];
this.dataMr = mrlist[0];
this.sendBuf = buffers[1];
this.recvBuf = buffers[2];
this.recvMr = mrlist[2];
dataBuf.clear();
sendBuf.clear();
sgeSend.setAddr(dataMr.getAddr());
sgeSend.setLength(dataMr.getLength());
sgeSend.setLkey(dataMr.getLkey());
sgeList.add(sgeSend);
sendWR.setWr_id(2000);
sendWR.setSg_list(sgeList);
sendWR.setOpcode(IbvSendWR.IBV_WR_SEND);
sendWR.setSend_flags(IbvSendWR.IBV_SEND_SIGNALED);
wrList_send.add(sendWR);
sgeRecv.setAddr(recvMr.getAddr());
sgeRecv.setLength(recvMr.getLength());
int lkey = recvMr.getLkey();
sgeRecv.setLkey(lkey);
sgeListRecv.add(sgeRecv);
recvWR.setSg_list(sgeListRecv);
recvWR.setWr_id(2001);
wrList_recv.add(recvWR);
System.out.println("ReadClient::initiated recv");
this.postRecv(wrList_recv).execute().free();
}
public void dispatchCqEvent(IbvWC wc) throws IOException {
wcEvents.add(wc);
}
public ArrayBlockingQueue<IbvWC> getWcEvents() {
return wcEvents;
}
public LinkedList<IbvSendWR> getWrList_send() {
return wrList_send;
}
public LinkedList<IbvRecvWR> getWrList_recv() {
return wrList_recv;
}
public ByteBuffer getDataBuf() {
return dataBuf;
}
public ByteBuffer getSendBuf() {
return sendBuf;
}
public ByteBuffer getRecvBuf() {
return recvBuf;
}
public IbvSendWR getSendWR() {
return sendWR;
}
public IbvRecvWR getRecvWR() {
return recvWR;
}
}
}
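A side note on the buffer handling above: the endpoints exchange text by viewing a raw ByteBuffer as a CharBuffer (`asCharBuffer().put(...)` on the writer side, `asCharBuffer().toString()` on the reader side), with `clear()` in between. A minimal standalone sketch of that round trip, independent of DISNI; the 100-byte size matches the example's `buffersize`:

```java
import java.nio.ByteBuffer;

public class CharBufferRoundTrip {
    // Write a message the way the server fills its data buffer, then read it
    // back the way the client prints it.
    static String roundTrip(String msg) {
        ByteBuffer buf = ByteBuffer.allocateDirect(100);
        buf.asCharBuffer().put(msg);   // writes UTF-16 chars starting at position 0
        buf.clear();                   // resets position/limit; the bytes stay put
        // asCharBuffer().toString() reads from position to limit, so the unused
        // tail of the buffer comes back as NUL chars; trim() strips them off
        return buf.asCharBuffer().toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("Hello from the client"));
    }
}
```

This is why the examples can call `clear()` right after writing without losing data: `clear()` only resets the position and limit, never the contents.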
ReadServer
public class ReadServer implements RdmaEndpointFactory<ReadServer.CustomServerEndpoint> {
//roughly the counterpart of Netty's NioEventLoopGroup
private RdmaActiveEndpointGroup<ReadServer.CustomServerEndpoint> endpointGroup;
private String host;
private int port;
//main entry point
public static void main(String[] args) throws Exception {
ReadServer simpleServer = new ReadServer();
simpleServer.launch(args);
}
//parse the command-line arguments
public void launch(String[] args) throws Exception {
CmdLineCommon cmdLine = new CmdLineCommon("ReadServer");
try {
cmdLine.parse(args);
} catch (ParseException e) {
cmdLine.printHelp();
System.exit(-1);
}
host = cmdLine.getIp();
port = cmdLine.getPort();
this.run();
}
public void run() throws Exception {
//same parameters as in ReadClient: 1000 is the timeout, false disables polling, 128 is the max number of WRs,
//4 is the max number of SGEs per WR (each request can address at most 4 memory buffers), and the last 128 is the length of the completion queue (CQ)
endpointGroup = new RdmaActiveEndpointGroup<CustomServerEndpoint>(1000, false, 128, 4, 128);
endpointGroup.init(this);
//create a server endpoint
RdmaServerEndpoint<ReadServer.CustomServerEndpoint> serverEndpoint = endpointGroup.createServerEndpoint();
//we can call bind on a server endpoint, just like we do with sockets
//bind to the local address and port
InetAddress ipAddress = InetAddress.getByName(host);
InetSocketAddress address = new InetSocketAddress(ipAddress, port);
serverEndpoint.bind(address, 10);
System.out.println("ReadServer::server bound to address " + address.toString());
//we can accept new connections
//block waiting for an incoming connection
ReadServer.CustomServerEndpoint endpoint = serverEndpoint.accept();
System.out.println("ReadServer::connection accepted ");
//let's prepare a message to be sent to the client
//in the message we include the RDMA information of a local buffer which we allow the client to read using a one-sided RDMA operation
//reaching this point means a client has connected; send it a message containing the details of the server's RDMA data buffer
ByteBuffer dataBuf = endpoint.getDataBuf();
ByteBuffer sendBuf = endpoint.getSendBuf();
IbvMr dataMr = endpoint.getDataMr();
dataBuf.asCharBuffer().put("This is a RDMA/read on stag " + dataMr.getLkey() + " !");
dataBuf.clear();
sendBuf.putLong(dataMr.getAddr());
sendBuf.putInt(dataMr.getLength());
sendBuf.putInt(dataMr.getLkey());
sendBuf.clear();
//post the operation to send the message
System.out.println("ReadServer::sending message");
endpoint.postSend(endpoint.getWrList_send()).execute().free();
//we have to wait for the CQ event, only then we know the message has been sent out
endpoint.getWcEvents().take();
//let's wait for the final message to be received. We don't need to check the message itself, just the CQ event is enough.
endpoint.getWcEvents().take();
System.out.println("ReadServer::final message");
//close everything
endpoint.close();
serverEndpoint.close();
endpointGroup.close();
}
public ReadServer.CustomServerEndpoint createEndpoint(RdmaCmId idPriv, boolean serverSide) throws IOException {
return new ReadServer.CustomServerEndpoint(endpointGroup, idPriv, serverSide);
}
public static class CustomServerEndpoint extends RdmaActiveEndpoint {
private ByteBuffer buffers[];
private IbvMr mrlist[]; //the memory regions the RDMA device operates on
private int buffercount = 3;
private int buffersize = 100;
private ByteBuffer dataBuf; //each ByteBuffer maps 1:1 to an IbvMr: allocate the ByteBuffer first, then register it as an IbvMr
private IbvMr dataMr;
private ByteBuffer sendBuf;
private IbvMr sendMr;
private ByteBuffer recvBuf;
private IbvMr recvMr;
private LinkedList<IbvSendWR> wrList_send;
private IbvSge sgeSend;
private LinkedList<IbvSge> sgeList;
private IbvSendWR sendWR;
private LinkedList<IbvRecvWR> wrList_recv;
private IbvSge sgeRecv;
private LinkedList<IbvSge> sgeListRecv;
private IbvRecvWR recvWR;
private ArrayBlockingQueue<IbvWC> wcEvents;
public CustomServerEndpoint(RdmaActiveEndpointGroup<CustomServerEndpoint> endpointGroup, RdmaCmId idPriv, boolean serverSide) throws IOException {
super(endpointGroup, idPriv, serverSide);
this.buffercount = 3;
this.buffersize = 100;
buffers = new ByteBuffer[buffercount];
this.mrlist = new IbvMr[buffercount];
for (int i = 0; i < buffercount; i++){
buffers[i] = ByteBuffer.allocateDirect(buffersize);
}
this.wrList_send = new LinkedList<IbvSendWR>();
this.sgeSend = new IbvSge();
this.sgeList = new LinkedList<IbvSge>();
this.sendWR = new IbvSendWR();
this.wrList_recv = new LinkedList<IbvRecvWR>();
this.sgeRecv = new IbvSge();
this.sgeListRecv = new LinkedList<IbvSge>();
this.recvWR = new IbvRecvWR();
this.wcEvents = new ArrayBlockingQueue<IbvWC>(10);
}
//important: we override the init method to prepare some buffers (memory registration, post recv, etc).
//This guarantees that at least one recv operation will be posted at the moment this endpoint is connected.
public void init() throws IOException{
super.init();
for (int i = 0; i < buffercount; i++){
mrlist[i] = registerMemory(buffers[i]).execute().free().getMr();
}
this.dataBuf = buffers[0];
this.dataMr = mrlist[0];
this.sendBuf = buffers[1];
this.sendMr = mrlist[1];
this.recvBuf = buffers[2];
this.recvMr = mrlist[2];
sgeSend.setAddr(sendMr.getAddr());
sgeSend.setLength(sendMr.getLength());
sgeSend.setLkey(sendMr.getLkey());
sgeList.add(sgeSend);
sendWR.setWr_id(2000);
sendWR.setSg_list(sgeList);
sendWR.setOpcode(IbvSendWR.IBV_WR_SEND);
sendWR.setSend_flags(IbvSendWR.IBV_SEND_SIGNALED);
wrList_send.add(sendWR);
sgeRecv.setAddr(recvMr.getAddr());
sgeRecv.setLength(recvMr.getLength());
int lkey = recvMr.getLkey();
sgeRecv.setLkey(lkey);
sgeListRecv.add(sgeRecv);
recvWR.setSg_list(sgeListRecv);
recvWR.setWr_id(2001);
wrList_recv.add(recvWR);
this.postRecv(wrList_recv).execute();
}
public void dispatchCqEvent(IbvWC wc) throws IOException {
wcEvents.add(wc);
}
public ArrayBlockingQueue<IbvWC> getWcEvents() {
return wcEvents;
}
public LinkedList<IbvSendWR> getWrList_send() {
return wrList_send;
}
public LinkedList<IbvRecvWR> getWrList_recv() {
return wrList_recv;
}
public ByteBuffer getDataBuf() {
return dataBuf;
}
public ByteBuffer getSendBuf() {
return sendBuf;
}
public ByteBuffer getRecvBuf() {
return recvBuf;
}
public IbvSendWR getSendWR() {
return sendWR;
}
public IbvRecvWR getRecvWR() {
return recvWR;
}
public IbvMr getDataMr() {
return dataMr;
}
public IbvMr getSendMr() {
return sendMr;
}
public IbvMr getRecvMr() {
return recvMr;
}
}
}
Overall Logic
The overall flow of the example is simple:
- The server creates a ServerEndpoint and prepares three buffers: a data buffer, a send buffer, and a recv buffer. The data buffer holds the content the client is going to read.
- The server then calls bind() and waits for client connections.
- Once accept() returns, the server writes some data into the data buffer and sends the data buffer's address, length, and key to the client via the send buffer.
- The client receives that message, parses out the values, and prints them.
- The client then sets the server data buffer's key and address on its sendWR and performs a one-sided read.
One-sided operations: read and write are one-sided; send and recv are two-sided. A one-sided operation only needs the source and destination addresses of the data; the remote application never observes the transfer, so when the client performs a one-sided operation, nothing runs on the server side. That is why, in the example above, the server sends the key and address of its data buffer to the client right after the connection is established, and the client then performs one-sided reads entirely on its own. The flow in detail:
- First the client (C) and the server (S) establish a connection, initializing the QPs on both ends.
- The data lives in S's data buffer (which must be registered with the RDMA device in advance). S sends the buffer's address and key to C, in effect granting C the right to operate on that buffer.
- Once C has the address and key, the RDMA device wraps them into a read request and fetches the data from the corresponding buffer on S.
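The handshake above boils down to a tiny wire format: a long (the address) followed by two ints (the length and the key), written by the server into its send buffer and parsed in the same order by the client. A self-contained sketch of that encoding, using made-up values in place of a real memory region:

```java
import java.nio.ByteBuffer;

public class RdmaInfoCodec {
    // Pack (addr, length, key) in the same order ReadServer writes its send buffer.
    static ByteBuffer encode(long addr, int length, int key) {
        ByteBuffer buf = ByteBuffer.allocate(Long.BYTES + 2 * Integer.BYTES);
        buf.putLong(addr);
        buf.putInt(length);
        buf.putInt(key);
        buf.clear();   // rewind so the receiver reads from the start
        return buf;
    }

    // Unpack in the same field order, as ReadClient does with its recv buffer.
    static long[] decode(ByteBuffer buf) {
        return new long[] { buf.getLong(), buf.getInt(), buf.getInt() };
    }

    public static void main(String[] args) {
        long[] info = decode(encode(0x7fdeadbeefL, 100, 42));
        System.out.println("addr " + info[0] + ", length " + info[1] + ", key " + info[2]);
    }
}
```

The order matters: both sides must agree on long-then-int-then-int, since the buffer carries no field markers.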
Example 2: SendRecvClient & SendRecvServer
Example 1 used only one-sided operations; this example demonstrates two-sided operations, i.e., send and recv requests.
SendRecvClient:
The client differs from Example 1 only in its run() method, so only run() is shown below; the same goes for the server.
public void run() throws Exception {
//create a EndpointGroup. The RdmaActiveEndpointGroup contains CQ processing and delivers CQ event to the endpoint.dispatchCqEvent() method.
endpointGroup = new RdmaActiveEndpointGroup<SendRecvClient.CustomClientEndpoint>(1000, false, 128, 4, 128);
endpointGroup.init(this);
//we have passed our own endpoint factory to the group, therefore new endpoints will be of type CustomClientEndpoint
//let's create a new client endpoint
SendRecvClient.CustomClientEndpoint endpoint = endpointGroup.createEndpoint();
//connect to the server
InetAddress ipAddress = InetAddress.getByName(host);
InetSocketAddress address = new InetSocketAddress(ipAddress, port);
endpoint.connect(address, 1000);
System.out.println("SimpleClient::client channel set up ");
//send the message "Hello from the client" to the server
ByteBuffer sendBuf = endpoint.getSendBuf();
sendBuf.asCharBuffer().put("Hello from the client");
sendBuf.clear();
SVCPostSend postSend = endpoint.postSend(endpoint.getWrList_send());
//set the id of the first WR to 4444
postSend.getWrMod(0).setWr_id(4444);
postSend.execute().free();
//wait for the send completion
IbvWC wc = endpoint.getWcEvents().take();
System.out.println("SimpleClient::message sent, wr_id " + wc.getWr_id());
//in this case a new CQ event means we have received data
//wait for the server's reply
endpoint.getWcEvents().take();
System.out.println("SimpleClient::message received");
//the response should be received in this buffer, let's print it
ByteBuffer recvBuf = endpoint.getRecvBuf();
recvBuf.clear();
System.out.println("Message from the server: " + recvBuf.asCharBuffer().toString());
//close everything
endpoint.close();
System.out.println("endpoint closed");
endpointGroup.close();
System.out.println("group closed");
// System.exit(0);
}
SendRecvServer:
The server-side logic mirrors the client's, so it is not walked through again.
public void run() throws Exception {
//create a EndpointGroup. The RdmaActiveEndpointGroup contains CQ processing and delivers CQ event to the endpoint.dispatchCqEvent() method.
endpointGroup = new RdmaActiveEndpointGroup<SendRecvServer.CustomServerEndpoint>(1000, false, 128, 4, 128);
endpointGroup.init(this);
//create a server endpoint
RdmaServerEndpoint<SendRecvServer.CustomServerEndpoint> serverEndpoint = endpointGroup.createServerEndpoint();
//we can call bind on a server endpoint, just like we do with sockets
InetAddress ipAddress = InetAddress.getByName(host);
InetSocketAddress address = new InetSocketAddress(ipAddress, port);
serverEndpoint.bind(address, 10);
System.out.println("SimpleServer::servers bound to address " + address.toString());
//we can accept new connections
SendRecvServer.CustomServerEndpoint clientEndpoint = serverEndpoint.accept();
//we have previously passed our own endpoint factory to the group, therefore new endpoints will be of type CustomServerEndpoint
System.out.println("SimpleServer::client connection accepted");
//in our custom endpoints we have prepared (memory registration and work request creation) some memory buffers beforehand.
ByteBuffer sendBuf = clientEndpoint.getSendBuf();
sendBuf.asCharBuffer().put("Hello from the server");
sendBuf.clear();
//in our custom endpoints we make sure CQ events get stored in a queue, we now query that queue for new CQ events.
//in this case a new CQ event means we have received data, i.e., a message from the client.
clientEndpoint.getWcEvents().take();
System.out.println("SimpleServer::message received");
ByteBuffer recvBuf = clientEndpoint.getRecvBuf();
recvBuf.clear();
System.out.println("Message from the client: " + recvBuf.asCharBuffer().toString());
//let's respond with a message
clientEndpoint.postSend(clientEndpoint.getWrList_send()).execute().free();
//when receiving the CQ event we know the message has been sent
clientEndpoint.getWcEvents().take();
System.out.println("SimpleServer::message sent");
//close everything
clientEndpoint.close();
System.out.println("client endpoint closed");
serverEndpoint.close();
System.out.println("server endpoint closed");
endpointGroup.close();
System.out.println("group closed");
// System.exit(0);
}
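The two-sided exchange above can be mimicked with two plain BlockingQueues standing in for each side's pre-posted recv buffers: a put() plays the role of a send WR, a take() the role of a completed recv. This is purely illustrative and uses no DISNI types:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SendRecvModel {
    static String pingPong(String request) {
        BlockingQueue<String> toServer = new ArrayBlockingQueue<>(1); // server's posted recv
        BlockingQueue<String> toClient = new ArrayBlockingQueue<>(1); // client's posted recv
        Thread server = new Thread(() -> {
            try {
                String msg = toServer.take();   // server's recv completes
                toClient.put("Hello from the server (got: " + msg + ")"); // server replies with a send
            } catch (InterruptedException ignored) {}
        });
        server.start();
        try {
            toServer.put(request);              // client posts its send
            String reply = toClient.take();     // client's recv completes
            server.join();
            return reply;
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(pingPong("Hello from the client"));
    }
}
```

The key contrast with Example 1: here both sides actively participate in every message, which is why the real examples override init() to ensure a recv WR is always posted before the peer's send can arrive.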