References:
Detailed RocketMQ installation notes:
https://blog.csdn.net/wangmx1993328/article/details/81536168#%E6%96%87%E4%BB%B6%E4%B8%8A%E4%BC%A0
RocketMQ
https://blog.csdn.net/mr_evanchen/article/details/80584886 startup (result after the slave step; the home machine showed the same behavior)
Contents
Go to the distribution/target/apache-rocketmq directory
https://blog.csdn.net/louniuous/article/details/80220559 connect to a Linux system with Notepad++'s NppFTP plugin and edit shell scripts there
Install git
# yum -y install git
# git --version
Unpack RocketMQ (the binary release needs no build step)
Binary package:
rocketmq-all-4.3.0-bin-release
unzip -n rocketmq-all-4.3.0-source-release.zip -d /tmp
unzip rocketmq-all-4.2.0-source-release.zip   # (for the 4.2.0 release)
mv rocketmq-all-4.3.0 /opt/   # move the unpacked directory
cd /opt/
cd rocketmq-all-4.3.0/
Build
mvn -Prelease-all -DskipTests clean install -U
The build takes a while; if it errors out, re-run the build command.
Then go to the distribution/target/apache-rocketmq directory.
RocketMQ operations
After the build completes, adjust the configuration:
# cd distribution/target/apache-rocketmq   # the built program lives in 'apache-rocketmq'; this directory can be taken out and started on its own
# cd /opt/rocketmq-all-4.2.0/distribution/target/
# cp -a apache-rocketmq /opt/rocketmq
# cd /opt/rocketmq
######## Adjust startup memory ########
# vim bin/runserver.sh   # adjust the NameServer startup memory (can be skipped on machines with plenty of RAM); on a small machine, leaving this unchanged may prevent startup
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m" # adjust these heap sizes; note -Xmn must not exceed -Xmx (the original note had -Xmn512m, larger than the 256m heap)
# vim bin/runbroker.sh
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m"
######## Adjust log file locations ########
# vim conf/logback_broker.xml
# vim conf/logback_filtersrv.xml
# vim conf/logback_namesrv.xml
# vim conf/logback_tools.xml
In the four files above, change paths like [${user.home}/logs/rocketmqlogs/namesrv_default.log] to a custom log path such as [/data/logs/rocketmqlogs/namesrv_default.log]. If left unchanged, logs are written to the logs directory under the user's home directory.
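The four vim edits above can be done in one pass with sed. A sketch, demonstrated here on a throwaway copy so it is safe to run anywhere; the real target would be conf/logback_*.xml under /opt/rocketmq, and /data/logs is the example path from above:

```shell
workdir=$(mktemp -d)
# Fabricated one-line sample of the <file> element found in the logback configs.
printf '<file>${user.home}/logs/rocketmqlogs/namesrv_default.log</file>\n' \
  > "$workdir/logback_namesrv.xml"
# Real usage (run from /opt/rocketmq):
#   sed -i 's#${user.home}/logs#/data/logs#g' conf/logback_*.xml
sed -i 's#${user.home}/logs#/data/logs#g' "$workdir"/logback_*.xml
result=$(cat "$workdir/logback_namesrv.xml")
echo "$result"
rm -rf "$workdir"
```

The `#` delimiter avoids escaping the slashes in the paths.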
Configuration changes in detail
In runserver.sh, edit the first line under the JVM Configuration block, reducing the original 4g:
#===========================================================================================
# JVM Configuration
#===========================================================================================
#JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn125m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"
JAVA_OPT="${JAVA_OPT} -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8 -XX:-UseParNewGC"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:/dev/shm/rmq_srv_gc.log -XX:+PrintGCDetails"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages"
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
$JAVA ${JAVA_OPT} $@
In runbroker.sh, edit the first line under the JVM Configuration block, reducing the original 8g:
#===========================================================================================
# JVM Configuration
#===========================================================================================
#JAVA_OPT="${JAVA_OPT} -server -Xms8g -Xmx8g -Xmn4g"
JAVA_OPT="${JAVA_OPT} -server -Xms256m -Xmx256m -Xmn128m"
JAVA_OPT="${JAVA_OPT} -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:SurvivorRatio=8"
JAVA_OPT="${JAVA_OPT} -verbose:gc -Xloggc:/dev/shm/mq_gc_%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintAdaptiveSizePolicy"
JAVA_OPT="${JAVA_OPT} -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=30m"
JAVA_OPT="${JAVA_OPT} -XX:-OmitStackTraceInFastThrow"
JAVA_OPT="${JAVA_OPT} -XX:+AlwaysPreTouch"
JAVA_OPT="${JAVA_OPT} -XX:MaxDirectMemorySize=15g"
JAVA_OPT="${JAVA_OPT} -XX:-UseLargePages -XX:-UseBiasedLocking"
JAVA_OPT="${JAVA_OPT} -Djava.ext.dirs=${JAVA_HOME}/jre/lib/ext:${BASE_DIR}/lib"
#JAVA_OPT="${JAVA_OPT} -Xdebug -Xrunjdwp:transport=dt_socket,address=9555,server=y,suspend=n"
JAVA_OPT="${JAVA_OPT} ${JAVA_OPT_EXT}"
JAVA_OPT="${JAVA_OPT} -cp ${CLASSPATH}"
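The heap-size edits in both scripts can also be scripted with sed, assuming the stock 4g/8g defaults shown in the listings above. Demonstrated on a temporary copy:

```shell
workdir=$(mktemp -d)
# Reproduce the stock runserver.sh heap line, then shrink it as in the listing above.
echo 'JAVA_OPT="${JAVA_OPT} -server -Xms4g -Xmx4g -Xmn2g -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=320m"' \
  > "$workdir/runserver.sh"
# Real usage (run from /opt/rocketmq):
#   sed -i 's/-Xms4g -Xmx4g -Xmn2g/-Xms256m -Xmx256m -Xmn128m/' bin/runserver.sh
#   sed -i 's/-Xms8g -Xmx8g -Xmn4g/-Xms256m -Xmx256m -Xmn128m/' bin/runbroker.sh
sed -i 's/-Xms4g -Xmx4g -Xmn2g/-Xms256m -Xmx256m -Xmn128m/' "$workdir/runserver.sh"
result=$(cat "$workdir/runserver.sh")
echo "$result"
rm -rf "$workdir"
```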
Start the NameServer
# nohup sh bin/mqnamesrv &   # start the NameServer
# tail -f nohup.out
The Name Server boot success   # output like this means startup succeeded
# nohup sh bin/mqbroker -n localhost:9876 autoCreateTopicEnable=true &   # start the broker
# tail -f ~/logs/rocketmqlogs/broker.log
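Instead of watching `tail -f nohup.out` by eye, the success line can be checked with grep. A sketch, demonstrated on a fabricated nohup.out:

```shell
workdir=$(mktemp -d)
# Fabricated sample of the line the NameServer prints on successful startup.
echo 'The Name Server boot success. serializeType=JSON' > "$workdir/nohup.out"
if grep -q 'boot success' "$workdir/nohup.out"; then
  status=started
else
  status=not-started
fi
echo "namesrv: $status"
rm -rf "$workdir"
```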
Send test messages
export NAMESRV_ADDR=localhost:9876
sh bin/tools.sh org.apache.rocketmq.example.quickstart.Producer
02:18:03.971 [main] DEBUG i.n.u.i.l.InternalLoggerFactory - Using SLF4J as the default logging framework
02:18:03.977 [main] DEBUG i.n.c.MultithreadEventLoopGroup - -Dio.netty.eventLoopThreads: 4
02:18:03.988 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Buffer.address: available
02:18:03.988 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.theUnsafe: available
02:18:03.989 [main] DEBUG i.n.util.internal.PlatformDependent0 - sun.misc.Unsafe.copyMemory: available
02:18:03.989 [main] DEBUG i.n.util.internal.PlatformDependent0 - java.nio.Bits.unaligned: true
02:18:03.990 [main] DEBUG i.n.util.internal.PlatformDependent - Java version: 8
02:18:03.990 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noUnsafe: false
02:18:03.990 [main] DEBUG i.n.util.internal.PlatformDependent - sun.misc.Unsafe: available
02:18:03.990 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noJavassist: false
02:18:04.093 [main] DEBUG i.n.util.internal.PlatformDependent - Javassist: available
02:18:04.093 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
02:18:04.093 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.bitMode: 64 (sun.arch.data.model)
02:18:04.093 [main] DEBUG i.n.util.internal.PlatformDependent - -Dio.netty.noPreferDirect: false
02:18:04.111 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.noKeySetOptimization: false
02:18:04.112 [main] DEBUG io.netty.channel.nio.NioEventLoop - -Dio.netty.selectorAutoRebuildThreshold: 512
02:18:04.167 [MQClientFactoryScheduledThread] DEBUG i.n.util.internal.ThreadLocalRandom - -Dio.netty.initialSeedUniquifier: 0x8e19777279445df9 (took 7 ms)
02:18:04.200 [MQClientFactoryScheduledThread] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.allocator.type: unpooled
02:18:04.201 [MQClientFactoryScheduledThread] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.threadLocalDirectBufferSize: 65536
02:18:04.201 [MQClientFactoryScheduledThread] DEBUG io.netty.buffer.ByteBufUtil - -Dio.netty.maxThreadLocalCharBufferSize: 16384
02:18:04.226 [NettyClientSelector_1] DEBUG i.n.u.i.JavassistTypeParameterMatcherGenerator - Generated: io.netty.util.internal.__matchers__.org.apache.rocketmq.remoting.protocol.RemotingCommandMatcher
02:18:04.240 [MQClientFactoryScheduledThread] DEBUG io.netty.util.Recycler - -Dio.netty.recycler.maxCapacity.default: 262144
02:18:04.245 [NettyClientWorkerThread_1] DEBUG io.netty.buffer.AbstractByteBuf - -Dio.netty.buffer.bytebuf.checkAccessible: true
02:18:04.246 [NettyClientWorkerThread_1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.level: simple
02:18:04.246 [NettyClientWorkerThread_1] DEBUG io.netty.util.ResourceLeakDetector - -Dio.netty.leakDetection.maxRecords: 4
02:18:04.323 [NettyClientSelector_1] DEBUG io.netty.util.internal.Cleaner0 - java.nio.ByteBuffer.cleaner(): available
SendResult [sendStatus=SEND_OK, msgId=C0A8C5880C683D4EAC69437114DC0000, offsetMsgId=C0A8C58800002A9F0000000000008E4E, messageQueue=MessageQueue [topic=TopicTest, brokerName=localhost.localdomain, queueId=3], queueOffset=51]
SendResult [sendStatus=SEND_OK, msgId=C0A8C5880C683D4EAC694371153D0001, offsetMsgId=C0A8C58800002A9F0000000000008F00, messageQueue=MessageQueue [topic=TopicTest, brokerName=localhost.localdomain, queueId=0], queueOffset=51]
SendResult [sendStatus=SEND_OK, msgId=C0A8C5880C683D4EAC69437115570002, offsetMsgId=C0A8C58800002A9F0000000000008FB2, messageQueue=MessageQueue [topic=TopicTest, brokerName=localhost.localdomain, queueId=1], queueOffset=50]
SendResult [sendStatus=SEND_OK, msgId=C0A8C5880C683D4EAC694371155A0003, offsetMsgId=C0A8C58800002A9F0000000000009064, messageQueue=MessageQueue [topic=TopicTest, brokerName=localhost.localdomain, queueId=2], queueOffset=51]
SendResult [sendStatus=SEND_OK, msgId=C0A8C5880C683D4EAC694371155D0004, offsetMsgId=C0A8C58800002A9F0000000000009116, messageQueue=MessageQueue [topic=TopicTest, brokerName=localhost.localdomain, queueId=3], queueOffset=52]
---------------------------------------------------------------
Dual-master mode:
nohup sh mqbroker -n localhost:9876 -c ./conf/2m-noslave/broker-a.properties &
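For reference, the shipped conf/2m-noslave/broker-a.properties looks roughly like the following (values as in the 4.x binary release; verify against your own copy):

```properties
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
deleteWhen=04
fileReservedTime=48
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH
```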
Receive test messages
export NAMESRV_ADDR=localhost:9876
sh bin/tools.sh org.apache.rocketmq.example.quickstart.Consumer
Common pitfall: starting a second instance while one is still running, so the port is already in use.
Starting namesrv
Check the logs
Check a specific process:
ps aux | grep namesrv
netstat -noat                  # list ports currently in use
netstat -tunlp | grep <port>   # find which process holds a given port
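The netstat checks above can be wrapped in a small helper that probes a TCP port before starting anything. A hypothetical sketch using bash's /dev/tcp redirection (9876 is the NameServer listen port, 10911 the broker's default):

```shell
#!/bin/bash
# Returns 0 if something accepts connections on the given local TCP port.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}
for port in 9876 10911; do
  if port_in_use "$port"; then
    echo "port $port busy (is RocketMQ already running?)"
  else
    echo "port $port free"
  fi
done
```

The connect attempt runs in a subshell, so the file descriptor is closed automatically when the probe finishes.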
Fix
Edit /etc/profile and add: export NAMESRV_ADDR=localhost:9876
Then reload the profile: source /etc/profile
Shut down mqnamesrv and mqbroker, then start them again.
Start mqnamesrv:
nohup sh bin/mqnamesrv &
When starting the broker with
nohup sh bin/mqbroker -n localhost:9876 autoCreateTopicEnable=true &
it reports: "Lock failed, MQ already started"
nohup sh bin/mqbroker -n localhost:9876 autoCreateTopicEnable=true > ~/logs/rocketmqlogs/broker.log 2>&1 &
works instead.
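The shutdown-then-restart steps above can be collected into one helper, assuming the stock bin/mqshutdown script that ships with the release and the /opt/rocketmq layout used earlier. Wrapped in a function so nothing runs just by sourcing this; a sketch, not a tested procedure:

```shell
# Hypothetical helper: stop both daemons, then start them in the right order.
restart_mq() {
  cd /opt/rocketmq || return 1
  sh bin/mqshutdown broker     # stops the broker (releases its lock file)
  sh bin/mqshutdown namesrv    # stops the NameServer
  sleep 3                      # let ports and lock files be released
  nohup sh bin/mqnamesrv > /dev/null 2>&1 &
  nohup sh bin/mqbroker -n localhost:9876 autoCreateTopicEnable=true \
    > ~/logs/rocketmqlogs/broker.log 2>&1 &
}
```

Call it with `restart_mq` after sourcing.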
find /. -name 'logs' -type d   # search the filesystem for directories named "logs"
Chat system server built with Spring Boot + WebSocket + Netty + MQ
D:\wsy\下载\springboot+websocket+netty+Mq+实现聊天系统服务端
https://segmentfault.com/a/1190000020220432
Errors when sending messages (home machine)