Kafka parameters
https://www.cnblogs.com/angellst/p/9368493.html
Kafka API
https://blog.csdn.net/gabele/article/details/72083362?utm_medium=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase&depth_1-utm_source=distribute.pc_relevant_t0.none-task-blog-BlogCommendFromMachineLearnPai2-1.nonecase
Backpressure
https://blog.csdn.net/qq_37142346/article/details/88765552
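The link above covers backpressure when Spark Streaming consumes from Kafka. A minimal sketch of the relevant Spark settings, assuming a Spark Streaming job reading from Kafka with the direct API; the app name and rate numbers are placeholders:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("KafkaBackpressureDemo")                       // placeholder app name
  .set("spark.streaming.backpressure.enabled", "true")       // let Spark adapt the ingestion rate to processing speed
  .set("spark.streaming.backpressure.initialRate", "1000")   // rate for the first batch, before feedback is available
  .set("spark.streaming.kafka.maxRatePerPartition", "2000")  // hard cap: records per second per Kafka partition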
The most detailed Kafka principles summary ever
https://www.jianshu.com/p/3d2bbbeea14f
http://www.jasongj.com/2015/03/10/KafkaColumn1/
Kafka partitioning strategies
import java.util.{Properties, UUID}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Producer setup (assumed, not part of the original snippet): broker address and topic name are placeholders
val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](props)
val topic = "test"

for (i <- 51 to 60) {
  // 3. Build the record to send
  // Send the data to an explicit partition number
  //val record = new ProducerRecord[String, String](topic, 1, null, "myvalue:" + i)
  // Spread the data evenly across 3 partitions
  //val partitionNum = i % 3
  //val record = new ProducerRecord[String, String](topic, partitionNum, null, "myvalue:" + i)
  // No partition number, but a fixed key: partition = key.hashCode % 3
  //val record = new ProducerRecord[String, String](topic, "doit", "myvalue:" + i)
  // Random key: the hash of the key modulo the topic's partition count picks the partition
  //val record = new ProducerRecord[String, String](topic, UUID.randomUUID().toString, "myvalue:" + i)
  // Neither key nor partition: the default strategy is round-robin, spreading data evenly across partitions
  val record = new ProducerRecord[String, String](topic, "value-" + i)
  // 4. Send the message
  producer.send(record)
}
producer.close()
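As a follow-up to the key-based cases above, a minimal sketch of a custom partitioner implementing the hash(key) % partition-count idea from the comments. It is only illustrative: the Java client's built-in DefaultPartitioner hashes the serialized key bytes with murmur2 rather than calling hashCode, and the class name here is made up.

import java.util
import org.apache.kafka.clients.producer.Partitioner
import org.apache.kafka.common.Cluster

// Illustrative partitioner: partition = hash(key) % number of partitions of the topic
class HashModPartitioner extends Partitioner {
  override def partition(topic: String, key: AnyRef, keyBytes: Array[Byte],
                         value: AnyRef, valueBytes: Array[Byte], cluster: Cluster): Int = {
    val numPartitions = cluster.partitionsForTopic(topic).size()
    if (key == null) 0                                                      // simplified: keyless records all go to partition 0
    else ((key.hashCode % numPartitions) + numPartitions) % numPartitions  // non-negative modulo
  }
  override def close(): Unit = {}
  override def configure(configs: util.Map[String, _]): Unit = {}
}

It would be plugged in on the producer with props.put("partitioner.class", classOf[HashModPartitioner].getName).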
Kafka HA: ISR, a key Kafka consistency mechanism (Kafka replicas)
https://blog.csdn.net/qq_37502106/article/details/80271800
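A minimal producer-side sketch of how the ISR mechanism is used in practice, assuming a local broker (the address is a placeholder): acks=all makes the leader wait for every in-sync replica before acknowledging, while the broker/topic setting min.insync.replicas decides how far the ISR may shrink before such writes are rejected.

import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("acks", "all")   // the leader acknowledges only after every in-sync replica has the record
props.put("retries", "3")  // retry transient send failures
val producer = new KafkaProducer[String, String](props)
// On the topic/broker side, e.g. min.insync.replicas=2 with replication.factor=3
// keeps acks=all meaningful even if one replica falls out of the ISR.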
In-depth analysis of Kafka's read/write mechanism
https://blog.csdn.net/ioteye/article/details/90736737#01__3
https://www.cnblogs.com/huxi2b/p/6223228.html
Introduction to the KafkaController
https://blog.csdn.net/zhanglh046/article/details/72821995#commentBox
https://www.cnblogs.com/huxi2b/p/6980045.html
How Kafka partition leader election works
https://blog.csdn.net/bigtree_3721/article/details/82288483
https://www.cnblogs.com/xifenglou/p/7251112.html
Integrating Kafka with Spark Streaming
https://www.cnblogs.com/cac2020/p/10763483.html
https://www.cnblogs.com/dummyly/p/10007984.html
The number of partitions in Spark Streaming is determined by the number of Kafka partitions (see the direct-stream sketch below).
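A minimal sketch with the spark-streaming-kafka-0-10 direct API illustrating that mapping; the broker address, group id, topic name and batch interval are placeholders:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val conf = new SparkConf().setAppName("KafkaDirectStream").setMaster("local[*]")
val ssc = new StreamingContext(conf, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "demo-group",
  "auto.offset.reset" -> "latest"
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Array("test"), kafkaParams))

// With the direct approach each Kafka partition maps 1:1 to an RDD partition in every batch,
// so the parallelism printed below equals the topic's partition count.
stream.foreachRDD(rdd => println(s"partitions in this batch: ${rdd.getNumPartitions}"))

ssc.start()
ssc.awaitTermination()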
What Kafka Rebalance is, and what users of the kafka-python community client should watch out for during rebalances
https://www.cnblogs.com/piperck/p/11201896.html
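The article above is about the kafka-python client; to keep these notes in one language, the sketch below uses the Java consumer API from Scala instead and only shows where a rebalance listener hooks in. Broker address, group id and topic are placeholders.

import java.util
import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.{ConsumerRebalanceListener, KafkaConsumer}
import org.apache.kafka.common.TopicPartition

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("group.id", "demo-group")
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("enable.auto.commit", "false")

val consumer = new KafkaConsumer[String, String](props)
consumer.subscribe(util.Arrays.asList("test"), new ConsumerRebalanceListener {
  // Called before partitions are taken away during a rebalance: last chance to commit offsets
  override def onPartitionsRevoked(partitions: util.Collection[TopicPartition]): Unit =
    consumer.commitSync()
  // Called once the new assignment has been handed out
  override def onPartitionsAssigned(partitions: util.Collection[TopicPartition]): Unit =
    println(s"assigned: $partitions")
})

while (true) {
  val records = consumer.poll(Duration.ofMillis(500))
  val it = records.iterator()
  while (it.hasNext) println(it.next().value())
}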
Idempotence: the Kafka question big-company interviews always ask
https://www.codercto.com/a/34125.html
Before Kafka 0.11, the ISR + acks mechanism could guarantee that data is not lost, but not that it is not duplicated.
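Kafka 0.11 introduced the idempotent producer to close that gap. A minimal sketch of enabling it, assuming a local broker (the address is a placeholder):

import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("enable.idempotence", "true")  // the broker drops duplicate retries using the producer id + per-partition sequence numbers
props.put("acks", "all")
val producer = new KafkaProducer[String, String](props)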