I have searched the issues of this repository and believe that this is not a duplicate.
Ⅰ. Issue Description
seata-server worked fine throughout development and testing. The problem first appeared yesterday while we were preparing the production trial environment. We use the Eureka registry and a MySQL database, with the default startup parameters and scripts (-XX:MaxDirectMemorySize=1024m).
After the error occurred, we tried raising -XX:MaxDirectMemorySize to 2G, 3G, and 4G, but the problem persisted every time: the server blows up once roughly 5 microservices have connected.
Could you provide a recommended server configuration?
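For reference, the options we experimented with were passed through the startup script roughly as below. This is an illustrative sketch, not our exact script: the heap sizes, host/port values, and the extra tracking/leak-detection flags are assumptions added here for context, not something the default seata-server.sh sets.

```shell
# Illustrative JVM options (values are examples, not our production settings).
# -XX:MaxDirectMemorySize caps Netty's direct (off-heap) buffer pool;
# -XX:NativeMemoryTracking and the Netty leak-detection property only help
# with later diagnosis and are not part of the default script.
JAVA_OPTS="-server -Xms2g -Xmx2g \
  -XX:MaxDirectMemorySize=2048m \
  -XX:NativeMemoryTracking=summary \
  -Dio.netty.leakDetection.level=advanced"

# seata-server.sh accepts -p (port) and -h (registered host) in 1.0.0.
sh ./bin/seata-server.sh -p 8091 -h 112.128.4.29
```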
Ⅱ. Describe what happened
Error log:
2020-01-10 10:35:09.728 INFO [NettyServerNIOWorker_1_64] io.seata.core.rpc.netty.RpcServer.exceptionCaught:377 - channel exx: failed to allocate 134217728 byte(s) of direct memory (used: 1073741824, max: 1073741824), channel: [id: 0x5f464337, L:/112.128.4.29:8091 - R:/112.128.4.62:44292]
2020-01-10 10:35:09.728 ERROR [NettyServerNIOWorker_1_64] io.seata.core.rpc.netty.AbstractRpcRemoting.exceptionCaught:437 - 0318
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 134217728 byte(s) of direct memory (used: 1073741824, max: 1073741824)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:652)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:606)
at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:214)
at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:579)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:496)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:897)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:745)
Ⅲ. Describe what you expected to happen
The solutions I found online all address Netty memory leaks. I suspect this is related to JVM memory management instead, but I have not yet traced it in detail with jps.
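To confirm whether direct (off-heap) memory, rather than the heap, is what grows, the standard JDK tools can be used against the running process. A sketch of the commands (the grep pattern for the process name is an assumption; native memory tracking only works if the JVM was started with -XX:NativeMemoryTracking=summary):

```shell
# Find the seata-server PID (the class-name pattern is a guess; adjust to taste).
PID=$(jps -l | grep -i seata | awk '{print $1}')

# Native memory summary; direct buffers show up outside the Java Heap section.
# Requires the JVM to have been started with -XX:NativeMemoryTracking=summary.
jcmd "$PID" VM.native_memory summary

# Count live DirectByteBuffer instances on the heap; each one pins a slab of
# direct memory, so a steadily growing count points at a buffer leak.
jmap -histo "$PID" | grep -i DirectByteBuffer
```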
Ⅳ. How to reproduce it (as minimally and precisely as possible)
High-spec server (32 vCPUs, 64 GB RAM)
seata-server 1.0.0
5 or more microservices
Ⅴ. Anything else we need to know?
Ⅵ. Environment:
JDK version : 1.8
OS : CentOS 7.5
seata-server:1.0.0
Server configuration:
Dev/test: 4 cores, 8 GB (everything works; even 10+ connected microservices cause no problems)
Production trial: 32 cores, 64 GB (errors with just 5 microservices)
The seata-server configuration parameters are identical in both environments.