Apache Kafka 2.0.0

Apache Kafka 2.0.0 has been officially released. It is a major release that adds a number of important new features, along with many significant bug fixes and improvements, including several critical fixes.

Download Apache Kafka 2.0.0 >>> https://kafka.apache.org/downloads#2.0.0

Notable new features

  • KIP-290 adds support for prefixed ACLs, simplifying access control management in large secure deployments. Bulk access to topics, consumer groups or transactional ids with a prefix can now be granted using a single rule. Access control for topic creation has also been improved to enable access to be granted to create specific topics or topics with a prefix.
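    A minimal sketch of granting such a prefixed ACL through the AdminClient; the broker address, principal User:alice and prefix orders- are illustrative, and the cluster is assumed to have an authorizer configured:

        import java.util.{Collections, Properties}

        import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig}
        import org.apache.kafka.common.acl.{AccessControlEntry, AclBinding, AclOperation, AclPermissionType}
        import org.apache.kafka.common.resource.{PatternType, ResourcePattern, ResourceType}

        object PrefixedAclExample extends App {
          val props = new Properties()
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
          val admin = AdminClient.create(props)

          // One rule covering every topic whose name starts with "orders-";
          // principal, host and prefix here are made-up examples.
          val pattern = new ResourcePattern(ResourceType.TOPIC, "orders-", PatternType.PREFIXED)
          val entry = new AccessControlEntry("User:alice", "*", AclOperation.WRITE, AclPermissionType.ALLOW)

          admin.createAcls(Collections.singleton(new AclBinding(pattern, entry))).all().get()
          admin.close()
        }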

  • KIP-255 adds a framework for authenticating to Kafka brokers using OAuth2 bearer tokens. The SASL/OAUTHBEARER implementation is customizable using callbacks for token retrieval and validation.
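    A sketch of the client-side settings, assuming SASL_SSL transport; the login callback handler class is hypothetical and stands in for your own token-retrieval code:

        import java.util.Properties

        import org.apache.kafka.clients.CommonClientConfigs
        import org.apache.kafka.common.config.SaslConfigs

        object OAuthBearerClientConfig extends App {
          val props = new Properties()
          props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9093")
          props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL")
          props.put(SaslConfigs.SASL_MECHANISM, "OAUTHBEARER")
          props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;")
          // Hypothetical handler implementing token retrieval for this client.
          props.put(SaslConfigs.SASL_LOGIN_CALLBACK_HANDLER_CLASS, "com.example.MyOAuthLoginCallbackHandler")
          // Pass props to a KafkaProducer/KafkaConsumer as usual.
        }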

  • Host name verification is now enabled by default for SSL connections to ensure that the default SSL configuration is not susceptible to man-in-the-middle attacks. You can disable this verification if required.
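    For example, a client that must talk to brokers whose certificates lack a matching host name can opt out by clearing ssl.endpoint.identification.algorithm (a sketch):

        import java.util.Properties

        import org.apache.kafka.common.config.SslConfigs

        object DisableHostnameVerification extends App {
          val props = new Properties()
          // The default is now "https"; an empty string restores the pre-2.0
          // behaviour of skipping host name verification.
          props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "")
        }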

  • You can now dynamically update SSL truststores without broker restart. You can also configure security for broker listeners in ZooKeeper before starting brokers, including SSL keystore and truststore passwords and JAAS configuration for SASL. With this new feature, you can store sensitive password configs in encrypted form in ZooKeeper rather than in cleartext in the broker properties file.
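    A sketch of such a dynamic update through the AdminClient; the broker id, listener name and truststore path are assumptions, and note that the legacy alterConfigs call shown here replaces the broker's dynamic config set rather than merging into it:

        import java.util.{Collections, Properties}

        import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, Config, ConfigEntry}
        import org.apache.kafka.common.config.ConfigResource

        object UpdateTruststore extends App {
          val props = new Properties()
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
          val admin = AdminClient.create(props)

          // Re-point broker 0's "external" listener at a new truststore without a
          // restart. Broker id, listener name and path are illustrative.
          val broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0")
          val update = new Config(Collections.singleton(new ConfigEntry(
            "listener.name.external.ssl.truststore.location", "/etc/kafka/ssl/truststore-v2.jks")))
          admin.alterConfigs(Collections.singletonMap(broker0, update)).all().get()
          admin.close()
        }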

  • The replication protocol has been improved to avoid log divergence between leader and follower during fast leader failover. We have also improved resilience of brokers by reducing the memory footprint of message down-conversions. By using message chunking, both memory usage and memory reference time have been reduced to avoid OutOfMemory errors in brokers.

  • Kafka clients are now notified of throttling before any throttling is applied when quotas are enabled. This enables clients to distinguish between network errors and large throttle times when quotas are exceeded.

  • We have added a configuration option for Kafka consumer to avoid indefinite blocking in the consumer.
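    The option is default.api.timeout.ms (KIP-266), which together with the new poll(Duration) overload bounds calls that could previously block indefinitely. A sketch with invented topic and group names:

        import java.time.Duration
        import java.util.{Collections, Properties}

        import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}

        object BoundedConsumer extends App {
          val props = new Properties()
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group") // illustrative
          props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer")
          props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer")
          // Cap blocking calls such as commitSync() and partitionsFor() at 15 seconds.
          props.put(ConsumerConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "15000")

          val consumer = new KafkaConsumer[String, String](props)
          consumer.subscribe(Collections.singletonList("orders"))
          // poll(Duration) honours its timeout even while waiting for metadata,
          // unlike the now-deprecated poll(Long).
          val records = consumer.poll(Duration.ofSeconds(1))
          println(s"fetched ${records.count()} records")
          consumer.close()
        }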

  • We have dropped support for Java 7 and removed the previously deprecated Scala producer and consumer.

  • Kafka Connect includes a number of improvements and features. KIP-298 enables you to control how errors in connectors, transformations and converters are handled by enabling automatic retries and controlling the number of errors that are tolerated before the connector is stopped. More contextual information can be included in the logs to help diagnose problems and problematic messages consumed by sink connectors can be sent to a dead letter queue rather than forcing the connector to stop.
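    A sketch of these settings on a sink connector; the connector name, file path and dead letter queue topic are invented, and in practice this map is what would be submitted as JSON to the Connect REST API:

        object ErrorTolerantSinkConfig extends App {
          // KIP-298 error-handling settings; names and paths are illustrative.
          val config = Map(
            "name" -> "orders-file-sink",
            "connector.class" -> "org.apache.kafka.connect.file.FileStreamSinkConnector",
            "topics" -> "orders",
            "file" -> "/tmp/orders.txt",
            "errors.tolerance" -> "all", // keep running past bad records
            "errors.retry.timeout" -> "30000", // retry retriable failures for up to 30s
            "errors.log.enable" -> "true", // log context for each failure
            "errors.log.include.messages" -> "true",
            "errors.deadletterqueue.topic.name" -> "dlq-orders" // bad records go here, task keeps running
          )
          config.foreach { case (k, v) => println(s"$k=$v") }
        }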

  • KIP-297 adds a new extension point to move secrets out of connector configurations and integrate with any external key management system. The placeholders in connector configurations are only resolved before sending the configuration to the connector, ensuring that secrets are stored and managed securely in your preferred key management system and not exposed over the REST APIs or in log files.
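    For example, with the built-in FileConfigProvider registered in the worker configuration, a connector property can reference a secret indirectly; the file path, key and the database.password property below are illustrative:

        object ConfigProviderExample extends App {
          // Worker configuration enabling KIP-297 (connect-distributed.properties):
          //   config.providers=file
          //   config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
          //
          // The placeholder below is resolved only when the configuration is handed
          // to the connector, so the secret never shows up in the REST API or logs.
          // Property name, path and key are illustrative.
          val connectorConfig = Map(
            "database.password" -> "${file:/etc/kafka/connect-secrets.properties:db.password}"
          )
          connectorConfig.foreach { case (k, v) => println(s"$k=$v") }
        }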

  • We have added a thin Scala wrapper API for our Kafka Streams DSL, which provides better type inference and better type safety during compile time. Scala users can have less boilerplate in their code, notably regarding Serdes with new implicit Serdes.
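    A small sketch using the new org.apache.kafka.streams.scala package (topic names invented); note that no Consumed or Produced instances are spelled out, because the implicit Serdes are picked up automatically:

        import org.apache.kafka.streams.scala.StreamsBuilder
        import org.apache.kafka.streams.scala.ImplicitConversions._
        import org.apache.kafka.streams.scala.Serdes._ // implicit serdes for String, Long, ...
        import org.apache.kafka.streams.scala.kstream.KStream

        object ScalaDslExample extends App {
          val builder = new StreamsBuilder()
          // Topic names are illustrative; no explicit Consumed/Produced needed.
          val input: KStream[String, String] = builder.stream[String, String]("input-topic")
          input.mapValues(_.toUpperCase).to("output-topic")
          val topology = builder.build() // pass to KafkaStreams as usual
        }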

  • Message headers are now supported in the Kafka Streams Processor API, allowing users to add and manipulate headers read from the source topics and propagate them to the sink topics.
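    A sketch of a Processor that stamps an extra header before forwarding (the header key and value are invented):

        import java.nio.charset.StandardCharsets

        import org.apache.kafka.streams.processor.AbstractProcessor

        class HeaderStampingProcessor extends AbstractProcessor[String, String] {
          override def process(key: String, value: String): Unit = {
            // context().headers() exposes the headers read from the source topic;
            // anything added here travels with the record to the sink topic.
            // The header key/value are illustrative.
            context().headers().add("processed-by", "my-app".getBytes(StandardCharsets.UTF_8))
            context().forward(key, value)
          }
        }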

  • Windowed aggregation performance in Kafka Streams has been significantly improved (sometimes by an order of magnitude) thanks to the new single-key-fetch API.

  • We have further improved unit testability of Kafka Streams with the kafka-streams-test-utils artifact.
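    A sketch of a unit test that pipes a record through a small topology with TopologyTestDriver, assuming both kafka-streams-test-utils and kafka-streams-scala are on the test classpath:

        import java.util.Properties

        import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}
        import org.apache.kafka.streams.scala.StreamsBuilder
        import org.apache.kafka.streams.scala.ImplicitConversions._
        import org.apache.kafka.streams.scala.Serdes._
        import org.apache.kafka.streams.test.ConsumerRecordFactory
        import org.apache.kafka.streams.{StreamsConfig, TopologyTestDriver}

        object TopologyTest extends App {
          val builder = new StreamsBuilder()
          builder.stream[String, String]("input-topic").mapValues(_.toUpperCase).to("output-topic")

          val props = new Properties()
          props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unit-test")
          props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234") // never contacted

          val driver = new TopologyTestDriver(builder.build(), props)
          val factory = new ConsumerRecordFactory("input-topic", new StringSerializer, new StringSerializer)

          driver.pipeInput(factory.create("input-topic", "key", "hello"))
          val out = driver.readOutput("output-topic", new StringDeserializer, new StringDeserializer)
          assert(out.value() == "HELLO")
          driver.close()
        }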
