Kafka: The Definitive Guide - Reading Notes

I. Meet Kafka
1) What is the pub/sub (publish/subscribe) model?
 
Publish/subscribe messaging is a pattern that is characterized by the sender (publisher) of a piece of data (message) not specifically directing it to a receiver. Instead, the publisher classifies the message somehow, and that receiver (subscriber) subscribes to receive certain classes of messages. Pub/sub systems often have a broker, a central point where messages are published, to facilitate this.
 
2) Newer Kafka additions: Kafka Connect and stream processing
Connect is used to pull data into Kafka from systems such as MySQL or HBase, or to push data from Kafka out to other stores such as MySQL.
The Streams API is for stream processing. Traditionally you would put Spark Streaming or a similar platform behind Kafka, but that is no longer required.
There are also advanced client APIs—Kafka Connect API for data integration and Kafka Streams for stream processing.
 
3) How do you get particular messages into particular partitions?
Just give the message a key. For example, if the shop ID is used as the key, every message for the same shop lands in the same partition.
This is typically done using the message key and a partitioner that will generate a hash of the key and map it to a specific partition.
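A minimal producer sketch of this (the topic name "orders", the key "shop-42", and the broker address are made-up placeholders):
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Every record keyed "shop-42" hashes to the same partition,
            // so one shop's events keep their relative order.
            producer.send(new ProducerRecord<>("orders", "shop-42", "item=123 qty=2"));
        }
    }
}
```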
 
4) Can one partition be consumed by multiple members of the same group?
No. If a topic has 3 partitions but group A has 4 consumers, one of them will necessarily sit idle; the other 3 consumers each consume one partition.
The group assures that each partition is only consumed by one member.
 
5) The cluster has a special broker: the controller
Its main jobs are assigning partitions to brokers and watching for broker failures. This is similar to Elasticsearch, which also has a special node, the master node, that manages cluster-wide state such as nodes joining or leaving and the health of every node.
Within a cluster of brokers, one broker will also function as the cluster controller (elected automatically from the live members of the cluster).
 
6) Retention can be configured per topic
One of Kafka's features is durable storage of messages; the retention period is configurable, and different topics can have different settings.
Individual topics can also be configured with their own retention settings so that messages are stored for only as long as they are useful.
Topics can also be configured as log compacted, keeping only the last message for each key. Wonderful feature!
Topics can also be configured as log compacted, which means that Kafka will retain only the last message produced with a specific key.
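A rough admin-client sketch of attaching such per-topic settings at creation time (the topic names, partition/replica counts, and the 7-day figure are all made-up examples):
```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicWithRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic audit = new NewTopic("audit-log", 3, (short) 2)
                    .configs(Map.of("retention.ms", "604800000"));  // keep ~7 days
            NewTopic latest = new NewTopic("user-profile", 3, (short) 2)
                    .configs(Map.of("cleanup.policy", "compact"));  // keep last value per key
            admin.createTopics(List.of(audit, latest)).all().get();
        }
    }
}
```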
 
7) The official tool for replicating data between clusters:
MirrorMaker
 
8) Why install ZooKeeper?
Kafka uses ZooKeeper to store metadata. In current versions, consumer offsets are stored in the __consumer_offsets topic and managed by Kafka itself.
 
9) The number of partitions can only be increased, never decreased!!
Keep in mind that the number of partitions for a topic can only be increased, never decreased.
The partition count is often set equal to the number of brokers, or a multiple of it.
Many users will have the partition count for a topic be equal to, or a multiple of, the number of brokers in the cluster.
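A hedged admin-client sketch (topic name and target count are placeholders); asking for a total below the current partition count is rejected, since partitions can only grow:
```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Grow "orders" to 6 partitions in total; shrinking is not possible.
            admin.createPartitions(Map.of("orders", NewPartitions.increaseTo(6)))
                 .all().get();
        }
    }
}
```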
 
10) Don't create too many partitions
One reason is that every partition consumes memory and other resources on the broker, and more partitions lengthen leader elections.
Avoid overestimating, as each partition uses memory and other resources on the broker and will increase the time for leader elections.
 
11) Configuring how long Kafka retains messages
Any of the three settings below works; the smaller unit overrides the larger ones, so ms is usually the one to set.
log.retention.hours, log.retention.minutes, and log.retention.ms. All three of these specify the same configuration.
There is also a size-based setting; messages are deleted as soon as either limit is reached:
This value is set using the log.retention.bytes parameter, and it is applied per-partition.
 
log.segment.bytes
A smaller log-segment size means that files must be closed and allocated more often, which reduces the overall efficiency of disk writes.
Adjusting the size of the log segments can be important if topics have a low produce rate.
 
12) The broker can cap the maximum size of a single message
message.max.bytes
The Kafka broker limits the maximum size of a message that can be produced
 
II. Producer
1) First clarify the business requirements: do you need maximum throughput (losing a few application log lines is fine), or can you not tolerate losing a single message (order transactions)? Are duplicate messages acceptable?
Those diverse use cases also imply diverse requirements: is every message critical, or can we tolerate loss of messages? Are we OK with accidentally duplicating messages? Are there any strict latency or throughput requirements we need to support?
 
2) The partitioner decides which partition a message goes to; if there is no key, partitions are chosen round-robin.
It then adds the record to a batch of records that will also be sent to the same topic and partition. A separate thread is responsible for sending those batches of records to the appropriate Kafka brokers.
 
When the key is null and the default partitioner is used, the record will be sent to one of the available partitions of the topic at random. A round-robin algorithm will be used to balance the messages among the partitions.
 
If a key exists and the default partitioner is used, Kafka will hash the key (using its own hash algorithm, so hash values will not change when Java is upgraded), and use the result to map the message to a specific partition.
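To make the key-to-partition mapping concrete, here is a rough sketch of a custom partitioner in the spirit of the default one (the class name is made up; murmur2 is the hash the default partitioner uses). It would be enabled through the producer's partitioner.class setting.
```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class ShopIdPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0;  // keyless records: the real default partitioner spreads these out instead
        }
        // Hash the key bytes and map the result onto one of the topic's partitions.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override public void configure(Map<String, ?> configs) { }
    @Override public void close() { }
}
```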
 
3) Which compression codec to use
Snappy was invented by Google and balances compression ratio against CPU usage.
Gzip uses more CPU but compresses better. Choose based on the situation; when unsure, pick Snappy (see the config sketch after item 4).
Snappy compression was invented by Google to provide decent compression ratios with low CPU overhead and good performance, so it is recommended in cases where both performance and bandwidth are a concern. Gzip compression will typically use more CPU and time but result in better compression ratios
 
4) The batch.size, linger.ms, and max.request.size parameters
batch.size is memory in bytes (not a count of messages!)
linger.ms controls the amount of time to wait for additional messages before sending the current batch.
max.request.size: the largest message the producer will send, in bytes. The broker has a matching limit (message.max.bytes, mentioned above), and the two are best kept consistent.
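A hedged configuration sketch covering the compression choice (item 3) and these batching/size parameters; the numbers are illustrative starting points, not recommendations, and the broker address is a placeholder:
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class TunedProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "snappy");   // item 3: decent ratio, low CPU
        props.put("batch.size", "32768");          // bytes per batch, not a message count
        props.put("linger.ms", "10");              // wait up to 10 ms to fill a batch
        props.put("max.request.size", "1048576");  // keep aligned with broker message.max.bytes
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records as usual
        }
    }
}
```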
 
III. Consumer
1) How do you consume faster?
The main way we scale data consumption from a Kafka topic is by adding more consumers to a consumer group.
 
Keep in mind that there is no point in adding more consumers than you have partitions in a topic—some of the consumers will just be idle.
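A minimal consumer sketch under assumed names (group "order-processors", topic "orders", broker address): run several copies of this and the group splits the partitions among them, up to one consumer per partition.
```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("group.id", "order-processors");         // same group = share the work
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```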
 
2) What is a rebalance?
Moving partition ownership from one consumer to another is called a rebalance.
When a consumer joins or leaves the group, the partitions have to be reassigned among the consumers. A rebalance stops the world: consumption pauses, and once the new assignment is in place each consumer resumes from the last committed offset.
Messages that were already processed but not yet committed will therefore be consumed again.
Kafka does not guarantee exactly-once here, but it does guarantee at-least-once.
 
The way consumers maintain membership in a consumer group and ownership of the partitions assigned to them is by sending heartbeats to a Kafka broker designated as the group coordinator
 
The group coordinator handles consumers joining or leaving and triggers the partition reassignment.
 
3) How does a consumer join a group?
The consumer first sends a JoinGroup request to the group coordinator; the first consumer to join becomes the group leader.
The leader receives from the group coordinator the list of all consumers that are still heartbeating,
and uses a PartitionAssignor implementation to decide which partitions go to which consumer.
When a consumer wants to join a group, it sends a JoinGroup request to the group coordinator. The first consumer to join the group becomes the group leader. The leader receives a list of all consumers in the group from the group coordinator (this will include all consumers that sent a heartbeat recently and which are therefore considered alive) and is responsible for assigning a subset of partitions to each consumer. It uses an implementation of PartitionAssignor to decide which partitions should be handled by which consumer.
Once the leader has worked out the assignment, it sends the result to the GroupCoordinator, which forwards it to all the consumers; each consumer only sees its own assignment.
This process happens on every rebalance.
Kafka has two built-in partition assignment policies, which we will discuss in more depth in the configuration section. After deciding on the partition assignment, the consumer leader sends the list of assignments to the GroupCoordinator, which sends this information to all the consumers. Each consumer only sees his own assignment—the leader is the only client process that has the full list of consumers in the group and their assignments. This process repeats every time a rebalance happens.
 
4) Thread safety
One consumer per thread is the rule.
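A rough sketch of what that rule looks like in practice (names are placeholders): each thread constructs and uses its own KafkaConsumer; instances are never shared between threads because the consumer is not thread-safe.
```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OneConsumerPerThread {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);  // <= partition count
        for (int i = 0; i < 3; i++) {
            pool.submit(() -> {
                Properties props = new Properties();
                props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
                props.put("group.id", "order-processors");
                props.put("key.deserializer",
                        "org.apache.kafka.common.serialization.StringDeserializer");
                props.put("value.deserializer",
                        "org.apache.kafka.common.serialization.StringDeserializer");
                // This consumer lives and dies inside this one thread only.
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(List.of("orders"));
                    while (true) {
                        consumer.poll(Duration.ofMillis(500));  // process records here
                    }
                }
            });
        }
    }
}
```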
 
5) Session timeout configuration
session.timeout.ms
The amount of time a consumer can be out of contact with the brokers while still considered alive defaults to 3 seconds. If more than session.timeout.ms passes without the consumer sending a heartbeat to the group coordinator, it is considered dead and the group coordinator will trigger a rebalance of the consumer group to allocate partitions from the dead consumer to the other consumers in the group.
 
If the consumer goes longer than this without sending a heartbeat to the group coordinator, it is considered to have left the group and a rebalance is triggered; the default is 3 seconds.
 
6) How are offsets committed?
How does a consumer commit an offset? It produces a message to Kafka, to a special __consumer_offsets topic, with the committed offset for each partition.
In order to know where to pick up the work, the consumer will read the latest committed offset of each partition and continue from there.
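A minimal manual-commit sketch under assumed names (group, topic, broker address): commitSync writes the latest polled offsets to __consumer_offsets and blocks until the broker confirms, so everything processed before it is not lost.
```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("group.id", "audit-readers");            // hypothetical group
        props.put("enable.auto.commit", "false");          // we commit ourselves
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    process(r);  // hypothetical business logic
                }
                // Commit the offsets of everything returned by the last poll.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> r) {
        System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
    }
}
```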
 
7) How is the controller elected?
When the controller broker is stopped or loses connectivity to Zookeeper, the ephemeral node will disappear. Other brokers in the cluster will be notified through the Zookeeper watch that the controller is gone and will attempt to create the controller node in Zookeeper themselves. The first node to create the new controller in Zookeeper is the new controller, while the other nodes will receive a “node already exists” exception and re-create the watch on the new controller node. Each time a controller is elected, it receives a new, higher controller epoch number through a Zookeeper conditional increment operation. The brokers know the current controller epoch and if they receive a message from a controller with an older number, they know to ignore it.
 
The controller uses the epoch number to prevent a “split brain” scenario where two nodes believe each is the current controller.
 
8) What are in-sync replicas?
This is Kafka's replication mechanism (for high availability). Replicas serve only as backups; all reads and writes go through the leader.
 
9) Ordering guarantees
All requests sent to the broker from a specific client will be processed in the order in which they were received—this guarantee is what allows Kafka to behave as a message queue and provide ordering guarantees on the messages it stores.
 
Messages within a single partition are guaranteed to stay in the order they were produced, but there is no ordering guarantee across the partitions of a topic.
How to design a request API (a toy header sketch follows this list):
All requests have a standard header that includes:
  • Request type (also called API key) - the equivalent of a command in our own systems
  • Request version (so the brokers can handle clients of different versions and respond accordingly) - always carry a version number
  • Correlation ID: a number that uniquely identifies the request and also appears in the response and in the error logs (the ID is used for troubleshooting) - an ID to tell requests apart
  • Client ID: used to identify the application that sent the request - handy for tracing which client a request (and a bug) came from
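Purely to illustrate the checklist above, and not Kafka's actual wire format, a toy header type could look like this:
```java
// Toy model of the four header fields above; the field names and types are
// illustrative only, not Kafka's real protocol encoding.
public record RequestHeader(
        short apiKey,        // request type, e.g. produce vs. fetch
        short apiVersion,    // lets brokers serve older and newer clients
        int correlationId,   // echoed back in the response for troubleshooting
        String clientId) {   // identifies the sending application
}
```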
 
10) How do clients send requests to the partition leader rather than to a replica?
How do the clients know where to send the requests? Kafka clients use another request type called a metadata request, which includes a list of topics the client is interested in. The server response specifies which partitions exist in the topics, the replicas for each partition, and which replica is the leader. Metadata requests can be sent to any broker because all brokers have a metadata cache that contains this information.
The client simply sends a metadata request first and caches the result locally. If the leader changes, the client receives a “Not a Leader for Partition” error, re-fetches the metadata, and carries on.
 
11) How does the broker respond to a produce request?
Once the message is written to the leader of the partition, the broker examines the acks configuration—if acks is set to 0 or 1, the broker will respond immediately; if acks is set to all, the request will be stored in a buffer called purgatory until the leader observes that the follower replicas replicated the message, at which point a response is sent to the client.
 
If acks is set to all, the broker waits for all in-sync replicas to confirm the write before answering the client; while it waits, the request sits in the purgatory buffer.
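On the client side this corresponds to the producer acks setting; a minimal, hedged sketch (broker address and serializers are placeholders):
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // acks=0 or 1: the broker replies right away.
        // acks=all: the reply waits until all in-sync replicas have the record.
        props.put("acks", "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records as usual
        }
    }
}
```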
 
 
12) What happens when a client asks for data that has already expired?
The broker returns an error.
If the client is asking for a message that is so old that it got deleted from the partition or an offset that does not exist yet, the broker will respond with an error.
 
13) Kafka's zero-copy technique
Messages go straight from the file (really the filesystem cache) into the network buffer, avoiding the extra file-to-memory and memory-to-socket-buffer (kernel) copies.
Kafka famously uses a zero-copy method to send the messages to the clients—this means that Kafka sends messages from the file (or more likely, the Linux filesystem cache) directly to the network channel without any intermediate buffers. This is different than most databases where data is stored in a local cache before being sent to clients. This technique removes the overhead of copying bytes and managing buffers in memory, and results in much improved performance.
 
14) Can a message be consumed as soon as it is written to the leader?
No! Only once all in-sync replicas have received it is the message considered safely stored, so until then it cannot be consumed.
Fetching such a message returns an empty response to the consumer, not an error.
It is also interesting to note that not all the data that exists on the leader of the partition is available for clients to read. Most clients can only read messages that were written to all in-sync replicas (follower replicas, even though they are consumers, are exempt from this—otherwise replication would not work). We already discussed that the leader of the partition knows which messages were replicated to which replica, and until a message was written to all in-sync replicas, it will not be sent to consumers—attempts to fetch those messages will result in an empty response rather than an error.
 
15) How does a replica stay in the in-sync set?
As long as the replica lags the leader by no more than replica.lag.time.max.ms, it is considered in-sync.
This delay is limited to replica.lag.time.max.ms—the amount of time a replica can be delayed in replicating new messages while still being considered in-sync.
 
IV. Reliable Data Delivery

Reposted from: https://www.cnblogs.com/gm-201705/p/9522084.html
