Kafka Study Notes

1. ZooKeeper

1.1. Installation and Configuration

http://coolxing.iteye.com/blog/1871009
https://www.cnblogs.com/xiohao/p/5541093.html
http://blog.51cto.com/tianshili/1762662
http://zookeeper.apache.org/doc/current/zookeeperStarted.html

After unpacking, configure:
Enter the conf directory
#cp zoo_sample.cfg zoo.cfg
#vi zoo.cfg
Set dataDir=/data/zookeeper
server.1=127.0.0.1:2888:3888
server.2=127.0.0.2:2888:3888
server.3=127.0.0.3:2888:3888
Create a myid file on each machine (one value per node):
echo 1 > /data/zookeeper/myid
echo 2 > /data/zookeeper/myid
echo 3 > /data/zookeeper/myid
Change the log file path:
edit conf/log4j.properties
zookeeper.log.dir=/data/zookeeper/log
Edit bin/zkEnv.sh:
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    # Log output path; no mkdir needed, ZooKeeper creates it on startup
    ZOO_LOG_DIR="/data/zookeeper/log"
fi

Start:
bin/zkServer.sh start
Shell (Java must be installed first):
bin/zkCli.sh
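A few basic zkCli commands are enough to sanity-check a node once it is up (standard ZooKeeper shell usage; /test is just an example path):

ls /
create /test hello
get /test
delete /test

ls / lists the root znodes; create/get/delete round-trip a small znode to confirm writes work.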

#vi /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Startup script for the ZooKeeper daemon
Documentation=http://zookeeper.apache.org/
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/opt/zookeeper-3.4.11/bin/zkServer.sh start
ExecReload=/bin/kill -HUP $MAINPID
ExecStop=/opt/zookeeper-3.4.11/bin/zkServer.sh stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
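After writing the unit file, register and start the service the usual systemd way:

systemctl daemon-reload
systemctl enable zookeeper
systemctl start zookeeper
systemctl status zookeeper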

http://blog.51cto.com/tianshili/1762662
So that the ZooKeeper nodes can talk to each other, open ports 2888 and 3888 (the quorum and election ports in the config file above).

1.2. Common Errors

Opening socket connection to server XXX.XXX.XXX/XXX.XXX.XXX:2181 Will not attempt to authenticate using SASL (unknown error)
Cause 1: the ZooKeeper client version in use does not match the version installed on the server.
Cause 2: network problems. If this occurs, try adding the server to /etc/hosts on the client application's machine.
Cause 3:
https://stackoverflow.com/questions/28109669/zookeeper-unable-to-open-socket-to-localhost-000000012181
Change the second line of /etc/hosts to:
::1 ip6-localhost ip6-localhost.localdomain localhost6 localhost6.localdomain6

1.3. Common Commands

http://blog.csdn.net/xiaolang85/article/details/13021339

zkCli.sh -server 127.0.0.1:2181

1. echo stat | nc 127.0.0.1 2181 shows whether a node is acting as follower or leader (a loop over the whole ensemble is sketched after this list).
2. echo ruok | nc 127.0.0.1 2181 tests whether the server is up; it replies imok if it is running.
3. echo dump | nc 127.0.0.1 2181 lists outstanding sessions and ephemeral nodes.
4. echo kill | nc 127.0.0.1 2181 shuts the server down.
5. echo conf | nc 127.0.0.1 2181 prints details of the server's configuration.
6. echo cons | nc 127.0.0.1 2181 lists full connection/session details for all clients connected to the server.
7. echo envi | nc 127.0.0.1 2181 prints details of the server environment (as opposed to conf).
8. echo reqs | nc 127.0.0.1 2181 lists outstanding requests.
9. echo wchs | nc 127.0.0.1 2181 lists brief details of the watches on the server.
10. echo wchc | nc 127.0.0.1 2181 lists watch details by session: the output is a list of sessions with their watches.
11. echo wchp | nc 127.0.0.1 2181 lists watch details by path: the output is paths with their associated sessions.
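To poll the whole ensemble at once, a minimal sketch assuming the three node IPs from the configuration above:

for h in 127.0.0.1 127.0.0.2 127.0.0.3; do
  echo -n "$h "
  echo ruok | nc $h 2181; echo
  echo stat | nc $h 2181 | grep Mode
done

Each node should answer imok, and exactly one should report Mode: leader.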

2. Kafka

2.1. Installation and Configuration

http://kafka.apache.org/quickstart
http://kafka.apachecn.org/documentation.html

curl http://mirrors.hust.edu.cn/apache/kafka/2.0.0/kafka_2.12-2.0.0.tgz -O

tar -xzf kafka_2.12-2.0.0.tgz

Edit config/server.properties:

zookeeper.connect=127.0.0.1:2181,127.0.0.2:2181,127.0.0.3:2181

Start Kafka:
nohup bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &
Stop Kafka:
bin/kafka-server-stop.sh

A Kafka cluster is just multiple brokers; each broker.id must be unique. When they run on the same machine, use separate config files, and the id, port, and log directory must all differ (start commands for the extra brokers follow the two configs below):
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
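Then start the extra brokers the same way as the first one (a sketch following the quickstart pattern above):

nohup bin/kafka-server-start.sh config/server-1.properties 1>/dev/null 2>&1 &
nohup bin/kafka-server-start.sh config/server-2.properties 1>/dev/null 2>&1 &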

Impact of the Kafka partition count:
https://www.cnblogs.com/fanguangdexiaoyuer/p/6066820.html

num.partitions: set the default number of partitions per topic to 3 (default 1)
default.replication.factor=3: set the default replication factor per topic to 3 (default 1)
delegation.token.max.lifetime.ms: set to 86400000 (default 604800000, i.e. 7 days)

2.2. Caveats

If you hit the exception Caused by: java.net.UnknownHostException: test-RTDB-RD-SVR02: unknown error,
edit /etc/hosts and put the machine's IP plus hostname above the default 127.0.0.1 entry (i.e. as the first line):

127.0.0.1 lvs-master test-RTDB-RD-SVR02
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

config/server.properties uses the machine name by default, which is painful when name resolution is flaky; change it to an explicit IP:
listeners=PLAINTEXT://127.0.0.1:9092

2.2.1.Exiting because log truncation is not allowed for partition

On startup:
[2018-11-27 13:43:27,506] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Exiting because log truncation is not allowed for partition aqjk-1-1, current leader's latest offset 0 is less than replica's latest offset 3080 (kafka.server.ReplicaFetcherThread)
[2018-11-27 13:43:27,513] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Stopped (kafka.server.ReplicaFetcherThread)
[2018-11-27 13:43:27,520] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Exiting because log truncation is not allowed for partition aqjk-1-0, current leader's latest offset 0 is less than replica's latest offset 3056 (kafka.server.ReplicaFetcherThread)
and Kafka fails to start.

See:
https://www.jianshu.com/p/d2cbaae38014
Disallowing unclean leader election can force the broker to shut down.
See:
http://kafka.apache.org/documentation/#configuration

Notable changes in 0.11.0.0
Unclean leader election is now disabled by default. The new default favors durability over availability. Users who wish to retain the previous behavior should set the broker config unclean.leader.election.enable to true.
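So if availability matters more than durability here, the pre-0.11 behavior can be restored in config/server.properties:

unclean.leader.election.enable=true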

2.3. Replication Tools: Controlled Shutdown

https://blog.csdn.net/damacheng/article/details/84611808

bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper localhost:12913/kafka --broker #brokerId# --num.retries 3 --retry.interval.ms 60

3. Log Cleanup

crontab -e
0 0 * * * find /opt/kafka_2.12-2.0.0/logs/*.log.* -mtime +7 -delete

4. Common Commands

4.1.Kafka-topics

Create topics
// for high-priority transfers
bin/kafka-topics.sh --create --zookeeper 127.0.0.77:2181,127.0.0.99:2181,127.0.0.93:2181 --replication-factor 2 --partitions 3 --topic aqjk-1
// for low-priority transfers
bin/kafka-topics.sh --create --zookeeper 127.0.0.77:2181,127.0.0.99:2181,127.0.0.93:2181 --replication-factor 2 --partitions 3 --topic aqjk-2
// for callbacks, no need to create manually
checker-001 (determined by the min.id setting; the key carries the mineId)

List topics
bin/kafka-topics.sh --list --zookeeper 127.0.0.77:2181,127.0.0.99:2181,127.0.0.93:2181

Describe a topic
http://blog.51cto.com/10120275/1865479

bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181,127.0.0.99:2181,121.1.1.93:2181 --topic aqjk-1
Alter a topic
bin/kafka-topics.sh --alter --zookeeper 127.0.0.77:2181,127.0.0.99:2181,127.0.0.93:2181 --topic test --partitions 3

Delete a topic
bin/kafka-topics.sh --delete --zookeeper 127.0.0.77:2181,127.0.0.99:2181,127.0.0.93:2181 --topic test

4.2.Kafka-console-producer

Send messages
bin/kafka-console-producer.sh --broker-list 127.0.0.77:9092,127.0.0.99:9092,127.0.0.93:9092 --topic test

4.3.Kafka-console-consumer

Consume messages
bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --topic aqjk-2 --group test1 --from-beginning

4.4.Kafka-consumer-groups

List groups
bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.77:9092,127.0.0.99:9092,127.0.0.93:9092 --list
Describe a group
bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.77:9092,127.0.0.99:9092,127.0.0.93:9092 --describe --group server

4.5.kafka-run-class.sh

https://www.zhihu.com/question/48611929/answer/215255700

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --topic aqjk-2 --time -1

--time -1 returns each partition's current latest offset (note: this is not the consumer-side offset, but the position of the newest message in each partition).
--time -2 returns each partition's earliest offset. Subtracting the earliest offsets from the latest ones and summing over the partitions gives the total number of messages currently stored in the topic on the cluster.
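A small sketch that does this arithmetic in one go (GetOffsetShell prints topic:partition:offset per line; the broker list and topic are the ones used above):

BROKERS=127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092
TOPIC=aqjk-2
latest=$(bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list $BROKERS --topic $TOPIC --time -1 | awk -F: '{s+=$3} END {print s}')
earliest=$(bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list $BROKERS --topic $TOPIC --time -2 | awk -F: '{s+=$3} END {print s}')
echo "messages in $TOPIC: $((latest - earliest))"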

bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --topic aqjk-2 --group server

bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --topic aqjk-1 --partition 1 --group server --skip-message-on-error

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --topic aqjk-1 --time -1

bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.184:9092,127.0.0.185:9092,127.0.0.186:9092 --list
Kafka replication and the ISR:
https://blog.csdn.net/qq_37502106/article/details/80271800

5. Log Errors

Possibly /tmp was full, so Kafka failed to write its log segments, and the whole broker went down:
[2019-03-20 11:45:30,540] ERROR Error while deleting segments for checker-014-1 in dir /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.NoSuchFileException: /tmp/kafka-logs/checker-014-1/00000000000000069699.log
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:409)
	at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
	at java.nio.file.Files.move(Files.java:1395)
	at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:786)
	at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:211)
	at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:488)
	at kafka.log.Log.asyncDeleteSegment(Log.scala:1751)
	at kafka.log.Log.deleteSegment(Log.scala:1738)
	at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1309)
	at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1309)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1309)
	at scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
	at kafka.log.Log.maybeHandleIOException(Log.scala:1837)
	at kafka.log.Log.deleteSegments(Log.scala:1300)
	at kafka.log.Log.deleteOldSegments(Log.scala:1295)
	at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1368)
	at kafka.log.Log.deleteOldSegments(Log.scala:1361)
	at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:874)
	at kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:872)
	at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
	at scala.collection.immutable.List.foreach(List.scala:389)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
	at kafka.log.LogManager.cleanupLogs(LogManager.scala:872)
	at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:395)
	at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:63)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
	Suppressed: java.nio.file.NoSuchFileException: /tmp/kafka-logs/checker-014-1/00000000000000069699.log -> /tmp/kafka-logs/checker-014-1/00000000000000069699.log.deleted
		at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
		at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
		at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
		at sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
		at java.nio.file.Files.move(Files.java:1395)
		at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:783)
		... 32 more
[2019-03-20 11:45:30,543] INFO [ReplicaManager broker=0] Stopping serving replicas in dir /tmp/kafka-logs (kafka.server.ReplicaManager)
[2019-03-20 11:45:30,543] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
org.apache.kafka.common.errors.KafkaStorageException: Error while deleting segments for checker-014-1 in dir /tmp/kafka-logs
Caused by: java.nio.file.NoSuchFileException: /tmp/kafka-logs/checker-014-1/00000000000000069699.log
	... (same stack trace and suppressed exception as above)
[2019-03-20 11:45:30,554] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions checker-001-1,checker-011-0,__consumer_offsets-30,checker-021-2,checker-017-2,__consumer_offsets-21,aqjk-2-1,__consumer_offsets-27,checker-010-2,checker-020-1,aqjk-1-0,__consumer_offsets-9,checker-008-0,checker-002-2,checker-012-1,checker-022-0,checker-004-0,checker-024-2,checker-005-1,checker-015-0,yhqt-2-0,checker-009-1,checker-019-0,checker-013-2,checker-023-1,__consumer_offsets-33,checker-018-2,checker-001-0,checker-020-2,checker-007-2,checker-017-1,test-topic-2,checker-002-1,checker-012-0,yhqt-1-2,aqjk-1-1,__consumer_offsets-36,__consumer_offsets-42,aqjk-2-2,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,checker-014-2,checker-024-1,__consumer_offsets-24,checker-005-0,yhqt-2-2,checker-003-2,checker-013-1,checker-023-0,checker-006-1,checker-016-0,checker-006-2,checker-016-1,test-topic-1,checker-023-2,checker-008-1,checker-018-0,__consumer_offsets-48,checker-019-2,checker-008-2,checker-018-1,aqjk-1-2,checker-002-0,checker-007-0,checker-015-2,checker-004-2,checker-014-1,checker-024-0,checker-006-0,checker-003-1,checker-013-0,checker-001-2,checker-011-1,checker-021-0,__consumer_offsets-6,checker-010-0,yhqt-1-1,checker-007-1,checker-017-0,checker-012-2,checker-022-1,test-topic-0,checker-004-1,checker-014-0,checker-003-0,checker-009-2,checker-019-1,__consumer_offsets-0,__consumer_offsets-39,checker-022-2,__consumer_offsets-12,checker-009-0,checker-005-2,checker-015-1,__consumer_offsets-45,checker-016-2,checker-011-2,checker-021-1,checker-010-1,checker-020-0,yhqt-1-0,aqjk-2-0,yhqt-2-1 (kafka.server.ReplicaFetcherManager)
[2019-03-20 11:45:30,556] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions checker-001-1,checker-011-0,__consumer_offsets-30,checker-021-2,checker-017-2,__consumer_offsets-21,aqjk-2-1,__consumer_offsets-27,checker-010-2,checker-020-1,aqjk-1-0,__consumer_offsets-9,checker-008-0,checker-002-2,checker-012-1,checker-022-0,checker-004-0,checker-024-2,checker-005-1,checker-015-0,yhqt-2-0,checker-009-1,checker-019-0,checker-013-2,checker-023-1,__consumer_offsets-33,checker-018-2,checker-001-0,checker-020-2,checker-007-2,checker-017-1,test-topic-2,checker-002-1,checker-012-0,yhqt-1-2,aqjk-1-1,__consumer_offsets-36,__consumer_offsets-42,aqjk-2-2,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,checker-014-2,checker-024-1,__consumer_offsets-24,checker-005-0,yhqt-2-2,checker-003-2,checker-013-1,checker-023-0,checker-006-1,checker-016-0,checker-006-2,checker-016-1,test-topic-1,checker-023-2,checker-008-1,checker-018-0,__consumer_offsets-48,checker-019-2,checker-008-2,checker-018-1,aqjk-1-2,checker-002-0,checker-007-0,checker-015-2,checker-004-2,checker-014-1,checker-024-0,checker-006-0,checker-003-1,checker-013-0,checker-001-2,checker-011-1,checker-021-0,__consumer_offsets-6,checker-010-0,yhqt-1-1,checker-007-1,checker-017-0,checker-012-2,checker-022-1,test-topic-0,checker-004-1,checker-014-0,checker-003-0,checker-009-2,checker-019-1,__consumer_offsets-0,__consumer_offsets-39,checker-022-2,__consumer_offsets-12,checker-009-0,checker-005-2,checker-015-1,__consumer_offsets-45,checker-016-2,checker-011-2,checker-021-1,checker-010-1,checker-020-0,yhqt-1-0,aqjk-2-0,yhqt-2-1 (kafka.server.ReplicaAlterLogDirsManager)
[2019-03-20 11:45:30,617] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions checker-001-1,checker-011-0,__consumer_offsets-30,checker-021-2,checker-017-2,__consumer_offsets-21,aqjk-2-1,__consumer_offsets-27,checker-010-2,checker-020-1,aqjk-1-0,__consumer_offsets-9,checker-008-0,checker-002-2,checker-012-1,checker-022-0,checker-004-0,checker-024-2,checker-005-1,checker-015-0,yhqt-2-0,checker-009-1,checker-019-0,checker-013-2,checker-023-1,__consumer_offsets-33,checker-018-2,checker-001-0,checker-020-2,checker-007-2,checker-017-1,test-topic-2,checker-002-1,checker-012-0,yhqt-1-2,aqjk-1-1,__consumer_offsets-36,__consumer_offsets-42,aqjk-2-2,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,checker-014-2,checker-024-1,__consumer_offsets-24,checker-005-0,yhqt-2-2,checker-003-2,checker-013-1,checker-023-0,checker-006-1,checker-016-0,checker-006-2,checker-016-1,test-topic-1,checker-023-2,checker-008-1,checker-018-0,__consumer_offsets-48,checker-019-2,checker-008-2,checker-018-1,aqjk-1-2,checker-002-0,checker-007-0,checker-015-2,checker-004-2,checker-014-1,checker-024-0,checker-006-0,checker-003-1,checker-013-0,checker-001-2,checker-011-1,checker-021-0,__consumer_offsets-6,checker-010-0,yhqt-1-1,checker-007-1,checker-017-0,checker-012-2,checker-022-1,test-topic-0,checker-004-1,checker-014-0,checker-003-0,checker-009-2,checker-019-1,__consumer_offsets-0,__consumer_offsets-39,checker-022-2,__consumer_offsets-12,checker-009-0,checker-005-2,checker-015-1,__consumer_offsets-45,checker-016-2,checker-011-2,checker-021-1,checker-010-1,checker-020-0,yhqt-1-0,aqjk-2-0,yhqt-2-1 and stopped moving logs for partitions because they are in the failed log directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
[2019-03-20 11:45:30,618] INFO Stopping serving logs in dir /tmp/kafka-logs (kafka.log.LogManager)
[2019-03-20 11:45:30,623] ERROR Shutdown broker because all log dirs in /tmp/kafka-logs have failed (kafka.log.LogManager)

6. Security Configuration

http://kafka.apache.org/documentation/#security

https://www.cnblogs.com/xdp-gacl/p/3750965.html
https://www.cnblogs.com/f1194361820/p/4266511.html
https://blog.csdn.net/a351945755/article/details/22790745

http://www.orchome.com/170
http://www.orchome.com/171
http://www.orchome.com/553
http://www.orchome.com/185
http://www.orchome.com/330
http://www.orchome.com/186

https://blog.csdn.net/shenyue_sam/article/details/77175734
https://blog.csdn.net/hohoo1990/article/details/79110031

6.1. Basic Concepts
https://blog.csdn.net/weixin_37746272/article/details/78652826
https://blog.csdn.net/mshootingstar/article/details/44019779

6.1.1. Message Digest (Digital Fingerprint)

1. A digest is computed over a data block of arbitrary length and produces a unique fingerprint (MD5/SHA-1). You send someone your message together with its digest; they compute the digest with the same algorithm and compare the two.
2. The MD5 Message-Digest Algorithm is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value, used to verify that a message arrived intact. MD5 was designed by the American cryptographer Ronald Linn Rivest and published in 1992 as a replacement for MD4.
3. The Secure Hash Algorithm (SHA) family comprises five algorithms: SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512, designed by the NSA and published by NIST as US government standards. The last four are collectively called SHA-2.
4. In Java, see the md5, sha1, sha256, etc. methods of the DigestUtils class in Apache Commons Codec; the openssl equivalents are sketched after this list.
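From the command line, the same digests can be produced with openssl (a quick sketch; "hello" is arbitrary input):

echo -n hello | openssl dgst -md5
echo -n hello | openssl dgst -sha1
echo -n hello | openssl dgst -sha256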

6.1.2. Symmetric Encryption
5. DES (Data Encryption Standard) is a block cipher that encrypts with a key. It was adopted in 1977 by the US federal government's National Bureau of Standards as a Federal Information Processing Standard (FIPS), authorized for use in unclassified government communications, and subsequently spread worldwide. Note that in some literature, DES the algorithm is called the Data Encryption Algorithm (DEA) to distinguish it from DES the standard.

6. 3DES (Triple DES) is a transitional cipher between DES and AES; it encrypts data three times with three 56-bit keys. It is a more secure variant of DES, built by composing DES as the basic block. Compared with the original DES, 3DES is more secure.
7. The Advanced Encryption Standard (AES), also known in cryptography as Rijndael, is a block-cipher standard adopted by the US federal government to replace DES. It has been analyzed extensively and is used worldwide. After a five-year selection process, NIST published AES in FIPS PUB 197 on November 26, 2001, effective May 26, 2002. By 2006, AES had become one of the most popular symmetric-key algorithms.
https://blog.csdn.net/jadyer/article/details/7615951
https://www.cnblogs.com/SirSmith/p/4987064.html

Encryption and decryption can be implemented by combining javax.crypto.Cipher with Commons Codec; a command-line sketch follows.
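A minimal symmetric-encryption sketch with openssl (AES-256-CBC with a password-derived key; the file names and password are just examples):

openssl enc -aes-256-cbc -salt -in secret.txt -out secret.enc -pass pass:changeit
openssl enc -d -aes-256-cbc -in secret.enc -out secret.dec -pass pass:changeit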

6.1.3. Asymmetric Encryption
8. RSA is an asymmetric encryption algorithm, widely used in public-key cryptography and e-commerce. In a public-key cryptosystem the encryption key (the public key, PK) is public information, while the decryption key (the private key, SK) must be kept secret; the encryption algorithm E and decryption algorithm D are also public. Although SK is determined by PK, SK cannot be computed from PK because the Euler totient phi(N) of the large number N cannot be computed.

https://www.cnblogs.com/coky/p/6726409.html
http://free0007.iteye.com/blog/1985643
https://www.cnblogs.com/xlhan/p/7120488.html
https://blog.csdn.net/cz0217/article/details/78426733

Java implementations of the RSA algorithm (an openssl command-line sketch follows):
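A minimal RSA sketch with openssl 1.x (2048-bit key; file names are examples):

openssl genrsa -out rsa-private.pem 2048
openssl rsa -in rsa-private.pem -pubout -out rsa-public.pem
echo -n hello | openssl rsautl -encrypt -pubin -inkey rsa-public.pem -out msg.enc
openssl rsautl -decrypt -inkey rsa-private.pem -in msg.enc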

6.1.4. Digital Signatures
9. A digital signature is a string of digits that only the sender of a message could produce and that no one else can forge; it is effective proof of the authenticity of the information sent. Digital signatures are an application of asymmetric-key encryption combined with message digests.
10. The sender performs an RSA operation with their private key over the characteristic data (the digital fingerprint) extracted from the message, which guarantees that the sender cannot deny having sent it (non-repudiation) and that the message was not tampered with after signing (integrity). On receiving the message, the receiver verifies the digital signature with the sender's public key (sketched with openssl after this list).
11. Put another way: the signature is over the digest, not the plaintext; signing uses the private key, not the public key; verifying the signature uses the public key, not the private key. A signature lets others confirm your identity and prevents the signer from repudiating; encryption keeps others from reading the plaintext; a digest prevents tampering.
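Sign-and-verify, sketched with openssl using the RSA key pair from the previous sketch (msg.txt is an example file): the signature is made over a SHA-256 digest with the private key and checked with the public key, exactly the split described above.

openssl dgst -sha256 -sign rsa-private.pem -out msg.sig msg.txt
openssl dgst -sha256 -verify rsa-public.pem -signature msg.sig msg.txt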

6.1.5.CA, Certificate Authority
12. An e-commerce certification authority (certification center) is the authority responsible for issuing and managing digital certificates. As the trusted third party in e-commerce transactions, it bears responsibility for verifying the validity of public keys in the public-key infrastructure.
13. It prevents forged digital certificates.

6.1.6.PKCS
14. The Public-Key Cryptography Standards (PKCS) are a set of public-key cryptography standards devised by RSA Data Security (USA) and its partners, covering certificate requests, certificate renewal, CRL publication, extended certificate contents, and the formats of digital signatures and digital envelopes, among other related protocols.
15. PKCS#1: defines the RSA public-key algorithm's encryption and signature mechanisms, mainly used for the digital signatures and digital envelopes described in PKCS#7.
16. PKCS#3: defines the Diffie-Hellman key agreement protocol.
17. PKCS#5: describes a method for encrypting a string with a secret key derived from a password, using MD2 or MD5 to derive the key and DES-CBC for encryption. Mainly used to encrypt private keys transferred from one computer to another; it cannot be used to encrypt messages.
18. PKCS#6: describes standard syntax for public-key certificates, mainly the extended format of X.509 certificates.
19. PKCS#7: defines a general message syntax, including enhanced cryptographic mechanisms such as digital signatures and encryption. PKCS#7 is compatible with PEM, so encrypted messages can be converted into PEM messages without additional cryptographic operations.
20. PKCS#8: describes the private-key information format, which includes a public-key algorithm's private key plus an optional attribute set.
21. PKCS#9: defines attribute types used in PKCS#6 certificate extensions, PKCS#7 digital signatures, and PKCS#8 private-key information.
22. PKCS#10: describes certification request syntax.
23. PKCS#11: called Cryptoki, defines a technology-independent programming interface for cryptographic devices such as smart cards and PCMCIA cards.
24. PKCS#12: the personal information exchange syntax standard; describes syntax for packaging a user's public key, private key, certificates, and other related information.
25. PKCS#13: elliptic-curve cryptography standard.
26. PKCS#14: pseudo-random number generation standard.
27. PKCS#15: cryptographic token information format standard.


6.1.7. JAAS and SASL
http://tetsu.iteye.com/blog/82627

SASL JAAS JSSE JGSS-API

Full names:
SASL (Simple Authentication and Security Layer)
JAAS (Java Authentication and Authorization Service)
JSSE (Java Secure Socket Extension)
Java GSS-API (Java Generic Security Service Application Programming Interface)

Definitions:
JAAS – JAAS implements a Java version of the standard Pluggable Authentication Module (PAM) framework.
JAAS Reference:
http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html#AppendixA

SASL – SASL is an Internet standard (RFC 2222) that specifies a protocol for authentication and optional establishment of a security layer between client and server applications. It provides pluggable authentication and security layer for network applications.
SASL Guide:
http://java.sun.com/j2se/1.5.0/docs/guide/security/sasl/sasl-refguide.html

JSSE – JSSE provides a framework and an implementation for a Java language version of the SSL and TLS protocols.
JSSE Reference:
http://java.sun.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html

Java GSS-API – Java GSS is the Java language bindings for the Generic Security Service Application Programming Interface (GSS-API). The only mechanism currently supported underneath this API on J2SE is Kerberos v5. Java GSS-API is used for securely exchanging messages between communicating applications.

Explanation:
Java already had codesource-based access controls, i.e. access decided by where the code came from and who signed it, but it lacked the ability to decide access based on who is executing the code. JAAS provides that capability. JAAS does exactly two things: authentication and authorization. Concretely, the result of a LoginModule's login determines who is executing the code (authentication) and whether they are allowed to (authorization).

JAAS, SASL, and GSS-API are all pluggable and decoupled from the concrete authentication mechanism, so you can use SASL + GSS-API + Kerberos, or JAAS + Kerberos.

JAAS strengthens the Java platform itself, whereas SASL is an Internet standard protocol of which Java SASL is just one implementation. JSSE and GSS-API are analogous in this respect: JSSE is the Java implementation of the SSL standard protocol, and Java GSS-API is the Java implementation of GSS-API.
Accordingly, $JAVA_HOME/lib/security/java.security contains:
security.provider.2=com.sun.net.ssl.internal.ssl.Provider #SunJSSE Provider
security.provider.5=sun.security.jgss.SunProvider #SunGSS-API Provider
security.provider.7=com.sun.security.sasl.Provider #SunSASL provider

Which API to use is usually determined by the protocol. For example, LDAP (Lightweight Directory Access Protocol) and IMAP (Internet Message Access Protocol) specify SASL; SSL/TLS communication requires JSSE; and when Kerberos is involved you need the Java GSS-API.

Also, JSSE and GSS-API are both used to encrypt communication, but JSSE is a socket-based API while GSS-API is token-based. That is, JSSE can only use sockets, whereas GSS-API can run over TCP sockets, UDP datagrams, or other transports.
For more comparison of GSS-API and JSSE see:
http://java.sun.com/j2se/1.5.0/docs/guide/security/jgss/tutorials/JGSSvsJSSE.html

These four are often combined. The most common pattern: SASL provides authentication on the client side, GSS-API is one of SASL's authentication mechanisms, and the authentication is ultimately implemented through a JAAS LoginModule. This yields the pattern SASL GSS-API/Kerberos V5 + LDAP, which is commonly used to implement SSO (Single Sign-On).

Advanced Security Programming in Java lists several combination patterns:
http://java.sun.com/javase/6/docs/technotes/guides/security/jgss/lab/index.html#two

Links:
Java 6.0 security documentation index:
http://java.sun.com/javase/6/docs/technotes/guides/security/

Java Platform Security Architecture:
http://java.sun.com/javase/6/docs/technotes/guides/security/spec/security-spec.doc.html

Java Cryptography Architecture Reference & API:
http://java.sun.com/j2se/1.5.0/docs/guide/security/CryptoSpec.html

6.2. SSL Encryption and Authentication
6.2.1. Generate SSL keys and certificates for each Kafka broker
https://blog.csdn.net/jsjsjs1789/article/details/53161985

Generate a key pair and a self-signed X.509 v3 certificate:
keytool -keystore server4.keystore.jks -alias dev-mkaj-4.ceic.inc -validity 36500 -genkey -keyalg RSA
Password: !@#$%^&*
First and last name: kafka chnenergy
Organization unit name: infotech
Organization name: chnenergy
City: beijing
Province: beijing
Country code: CN
Key password: just press Enter (same as the keystore password)
View it:
keytool -list -v -keystore server.keystore.jks
Kafka 2.0+ enables host name verification by default. We did not set a SAN or CN when creating the key pair, so the fully qualified domain name (FQDN) cannot be verified; disable host name verification by leaving the algorithm empty:
ssl.endpoint.identification.algorithm=

6.2.2. Create a CA

Create the CA:
openssl req -new -x509 -keyout ca-key -out ca-cert -days 36500

Use the same password: !@#$%^&*
-x509 generates a self-signed certificate (i.e. a root CA)
The generated ca-key file is the private key
The generated ca-cert file is the self-signed X.509 certificate (this is where the public key lives)

Add the generated CA to the clients' truststore so that clients can trust this CA:
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

Add the generated CA to the servers' truststore so that the servers can trust this CA:
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

6.2.3. Sign the Certificates

ca-cert
1) Party A uses certreq to generate a certificate signing request (CSR) file; the generated file represents the request.

2) The CA receives the request, processes it (gencert), and produces a certificate or certificate chain.

3) Party A receives the response and imports the certificate (importcert) into its keystore.

Generate the certificate signing request:
keytool -keystore server4.keystore.jks -alias dev-mkaj-4.ceic.inc -certreq -file cert-file4
Sign it:
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file4 -out cert-signed4 -days 36500 -CAcreateserial -passin pass:!@#$%^&*
(drop -passin pass:!@#$%^&* to enter the password interactively)
Import:

keytool -keystore server4.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server4.keystore.jks -alias dev-mkaj-4.ceic.inc -import -file cert-signed4
Note that importing under an existing alias (as with the broker alias here) replaces the entry.

6.2.4. Broker Settings

listeners=PLAINTEXT://127.0.0.105:9091,SSL://127.0.0.105:9092

ssl.endpoint.identification.algorithm=
ssl.keystore.location=/var/ssl/server.keystore.jks
ssl.keystore.password=!@#$%^&*
ssl.key.password=!@#$%^&*
ssl.truststore.location=/var/ssl/server.truststore.jks
ssl.truststore.password=!@#$%^&*

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2

Verify: openssl s_client -debug -connect dev-mkaj-4.ceic.inc:9092 -tls1_2

6.2.5. Client

security:
  protocol: SSL
ssl:
  enabled:
    protocols: TLSv1.2
  key:
    #password: "!@#$%^&*"
  truststore:
    location: C:/Users/zhanglu/client.truststore.jks
    password: "!@#$%^&*"
  keystore:
    #location: C:/Users/zhanglu/client.keystore.jks
    #password: "!@#$%^&*"

https://blog.csdn.net/write_down/article/details/79114573
In the C:\Java64\jdk1.8.0_91\jre\lib\security directory
(or C:\Java64\jre1.8.0_91\lib\security)

keytool -keystore cacerts -alias CARoot -import -file C:\Users\zhanglu\ca-cert

bin/kafka-console-producer.sh --broker-list dev-mkaj-4.ceic.inc:9092 --topic test --producer.config config/client-ssl.properties
bin/kafka-console-consumer.sh --bootstrap-server dev-mkaj-4.ceic.inc:9092 --topic test --consumer.config config/client-ssl.properties

https://blog.csdn.net/catoop/article/details/80819638
https://blog.csdn.net/zziamalei/article/details/46520797?utm_source=blogxgwz0

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
In this experiment the cause was that two of the three certificate entries on the server side were wrong: the alias did not correspond to the actual private key. Deleting the mismatched entries with keytool and re-importing them fixed it:
keytool -delete -alias dev-mkaj-4.ceic.inc -keystore server4.keystore.jks

When the broker sets ssl.client.auth to required or requested, client authentication is on, i.e. the client must present a certificate. If you do not want client authentication, set it to none.

6.2.6. SASL Setup
Four SASL mechanisms are available:
GSSAPI (Kerberos): requires setting up a Kerberos server
PLAIN
SCRAM-SHA-256: requires creating SCRAM credentials
SCRAM-SHA-512: requires creating SCRAM credentials

So we choose PLAIN only.

Broker-side JAAS file:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin!@#"
    user_admin="admin!@#"
    user_aqjk="aqjk!@#";
};
This configuration defines two users (admin and aqjk). The broker uses the username and password properties in the KafkaServer section to initiate connections to other brokers; in this example, admin is the user for inter-broker communication. The set of user_userName properties defines the passwords of all users that connect to the broker, and the broker verifies all client connections against them, including connections from other brokers using this configuration.

Add to bin/kafka-run-class.sh:
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"

listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

Client side:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="aqjk" \
    password="aqjk!@#";
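Putting the client side together, a minimal client-sasl.properties sketch (the file name and truststore path are examples; the keys are standard Kafka client settings):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/var/ssl/client.truststore.jks
ssl.truststore.password=!@#$%^&*
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="aqjk" password="aqjk!@#";

The console tools can then be pointed at it, e.g. --producer.config config/client-sasl.properties.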

6.2.7. Authorization and ACLs

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
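With the authorizer enabled, per-user permissions are granted with kafka-acls.sh; a sketch allowing the aqjk user to read and write topic aqjk-1 (adjust the ZooKeeper address to your cluster):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=127.0.0.1:2181 --add --allow-principal User:aqjk --operation Read --operation Write --topic aqjk-1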

6.2.8. Client Certificates

7. Disable IPv6
Red Hat enables IPv6 by default.
Run netstat and check for tcp6 entries; if any appear, IPv6 is definitely enabled.
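One common way to disable it on RHEL/CentOS 7 is via kernel parameters (a sketch; verify against your distribution's documentation before applying):

sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

Add the same two lines to /etc/sysctl.conf to persist across reboots.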

8.Kafka-eagle
https://ke.smartloli.org/

https://github.com/smartloli/kafka-eagle/tree/v1.2.4

9.Keepalived
https://blog.csdn.net/wngua/article/details/54668794

10. Kafka and ZooKeeper ACLs
https://www.jianshu.com/p/392248ab27f4

./bin/zkCli.sh

Add a user:
addauth digest <user>:<password>
Set the ACL (whether or not the id is filled in, it applies to all users authenticated so far):
setAcl <path> auth:<user>:<perms>

setAcl / ip:127.0.0.1:cdrwa
echo -n <user>:<password> | openssl dgst -binary -sha1 | openssl base64

setAcl /test digest:zoo:q0rfh3mamQNUEBya6WkjYxP8lRM=:rwdca

Add to bin/kafka-run-class.sh:
KAFKA_OPTS="$KAFKA_OPTS -Djava.security.auth.login.config=/opt/kafka_2.12-2.0.0/config/kafka_server_jaas.conf "

nohup bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &

Note how the JAAS file is written:
zookeeper {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin";
};

11. MariaDB
https://blog.csdn.net/scorpio3k/article/details/39378655

yum -y install mariadb-server mariadb
systemctl start mariadb.service
systemctl enable mariadb.service

mysqladmin -u root -h localhost password "root"
mysql -u root -proot

use mysql
update user set host = '%' where user = 'root';
flush privileges;
select host, user from user;

Send examples:
AQJKMKXX20180901120001[00 00 00 08]12345678
Heartbeat packet:
[01]ELCTMKXX20180901120001[00 00 00 00]
Data packet:
[01]AQJKMKXX20180901120001[00 00 00 08]12345678
