Using Apache Kafka to Implement Event-Driven Microservices

When talking about microservices architecture, most people think of a network of stateless services that communicate over HTTP (whether one calls it RESTful or not depends on how much of a nitpicker one is). But there is another approach which, depending on the use case at hand, may be a better fit. I am talking about event-driven microservices, where in addition to the classic request-response pattern, services publish messages that represent events (facts) and subscribe to topics (or queues, depending on the terminology used) to receive events/messages. Fully understanding and embracing this new software design paradigm is not straightforward, but it is totally worth it (at least looking into it). To explore the advantages of event-driven design and the evolutionary path that led to it, we need to look into several interconnected concepts, for example:

  • logs (including log-structured storage engines and write-ahead logs)
  • materialized views
  • event sourcing
  • Command Query Responsibility Segregation (CQRS)
  • stream processing
  • databases "turned inside out" (a.k.a. "unbundled" databases)

I would like to point you to the following books to become familiar with these topics:

I read those three books and then started building a simple PoC, since learning new design ideas is great, but it is not complete until you put them into practice. Also, I was not satisfied with the examples of event-driven applications/services available online; I found them too simplistic and not properly explained, so I decided to create my own example.

The proof of concept

The source code is split into two GitHub repositories (as per the Clean Architecture):

The proof of concept service keeps track of the balance available in bank accounts (like a ledger).
It listens for Transfer messages on a Kafka topic and when one is received, it updates the balance of the related account by publishing a new AccountBalance message on another Kafka topic.
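
To make the message contract concrete, here is a minimal sketch of what the two entities could look like as plain Java classes. The field names (accountId, amountCents, balanceCents) are assumptions for illustration, not the PoC's actual definitions:

```java
// Hypothetical entity shapes, for illustration only; the real POJOs live in
// the net.devaction.entity package and may differ (each class in its own file).
class TransferEntity {
    private final String accountId;  // account the transfer applies to (assumed field)
    private final long amountCents;  // positive for a credit, negative for a debit (assumed field)

    TransferEntity(String accountId, long amountCents) {
        this.accountId = accountId;
        this.amountCents = amountCents;
    }

    String getAccountId() { return accountId; }
    long getAmountCents() { return amountCents; }
}

class AccountBalanceEntity {
    private final String accountId;   // assumed field
    private final long balanceCents;  // balance after applying the latest transfer (assumed field)

    AccountBalanceEntity(String accountId, long balanceCents) {
        this.accountId = accountId;
        this.balanceCents = balanceCents;
    }

    String getAccountId() { return accountId; }
    long getBalanceCents() { return balanceCents; }
}
```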

Please note that each entity type is represented by two different classes:

  • one is generated by Apache Avro and is used for serialization and deserialization (so instances can be sent to and received from Kafka) → see the avro directory.
  • the other one is a POJO which may contain some convenience constructors and does not depend on Avro → see net.devaction.entity package.

The net.devaction.kafka.avro.util package holds converters to move back and forth from one data representation to the other.
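
As a rough sketch of what one of these converters could look like (building on the hypothetical entity fields above, and assuming the Avro-generated Transfer class exposes the usual generated builder and getters):

```java
// Hypothetical converter between the Avro-generated class and the POJO;
// the real converters live in net.devaction.kafka.avro.util and may differ.
public class TransferConverter {

    // POJO -> Avro: Avro-generated classes provide a builder (field names assumed).
    public static Transfer toAvro(TransferEntity entity) {
        return Transfer.newBuilder()
                .setAccountId(entity.getAccountId())
                .setAmountCents(entity.getAmountCents())
                .build();
    }

    // Avro -> POJO: Avro may expose strings as CharSequence, hence toString().
    public static TransferEntity toEntity(Transfer avro) {
        return new TransferEntity(
                avro.getAccountId().toString(),
                avro.getAmountCents());
    }
}
```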

At first, Apache Kafka may seem overwhelming; even though it resembles a classic message broker such as ActiveMQ or RabbitMQ, it is much more than that, and it works very differently internally.
Also, there are several Kafka client APIs, which adds to the confusion when learning.
We are going to focus on the following three:

  • the Producer API
  • the Consumer API
  • the Streams API

The Producer and Consumer APIs are lower level, and the Streams API is built on top of them. Both sets of APIs have advantages and disadvantages. The Producer/Consumer APIs give the application developer more control at the cost of higher complexity. The Streams API, on the other hand, is not as flexible, but it allows some standard operations to be implemented more easily and requires much less code.

The "transfers recording" example/PoC service can be started in one of the following two modes:

  • the (explicit) polling mode
  • the "join streams" mode

Both modes provide exactly the same functionality, which is very convenient for comparison purposes.

The (explicit) polling mode

It has four main components; a sketch of how they fit together follows the list:

  • A consumer which listens on the "transfers" topic → see TransferConsumer.java
  • A ReadOnlyKeyValueStore (which is part of the Streams API) to materialize the "account-balances" topic data into a queryable view, so we can use the accountId value to retrieve the latest/current balance of a specific account → see AccountBalanceRetrieverImpl.java. Please note that the accountId value is extracted from the "transfer" data message received by the consumer.
  • The business logic which creates a new/updated AccountBalanceEntity object from the received TransferEntity object and the current AccountBalanceEntity present in Kafka → see NewAccountBalanceProvider.java
  • A producer which publishes the updated balance by sending a message to the "account-balances" topic; the local data store then gets updated accordingly.
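
To show how these four components could fit together, here is a minimal sketch of the polling loop using the plain Producer/Consumer APIs. It assumes String-serialized messages for brevity (the PoC uses Avro), and applyTransfer is a placeholder standing in for the balance lookup and the business logic:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransfersPollingLoop {

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "transfers-recording");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {

            // 1. The consumer listens on the "transfers" topic.
            consumer.subscribe(Collections.singletonList("transfers"));
            while (true) {
                // 2. Explicitly poll for new transfer messages.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String accountId = record.key();
                    // 3. Look up the current balance and apply the business logic
                    //    (the PoC does the lookup through a ReadOnlyKeyValueStore).
                    String newBalance = applyTransfer(accountId, record.value());
                    // 4. Publish the updated balance, keyed by account id.
                    producer.send(new ProducerRecord<>("account-balances", accountId, newBalance));
                }
            }
        }
    }

    // Placeholder for the balance lookup plus business logic
    // (NewAccountBalanceProvider in the PoC).
    private static String applyTransfer(String accountId, String transferPayload) {
        return transferPayload; // illustrative only
    }
}
```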

The "join streams" mode

As mentioned before, the second mode of operation only uses the Streams API/DSL; by leveraging it, we can write code at a higher level of abstraction.

We can see that this code is much more compact than in the previous mode. We do not need to explicitly map the KStream key to the KTable key; that is precisely what the join does, so we need to choose the Kafka keys accordingly. In this case, both message keys represent the account id.
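
The original snippet is not reproduced here, but a minimal sketch of this mode, assuming String serdes instead of the PoC's Avro serdes and a placeholder computeNewBalance function for the business logic, could look like this:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class TransfersJoinApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "transfers-recording-streams");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Incoming transfers, keyed by account id.
        KStream<String, String> transfers = builder.stream("transfers");
        // Current balances, also keyed by account id, consumed as a changelog table.
        KTable<String, String> balances = builder.table("account-balances");

        // The join pairs each transfer with the latest balance for the same key,
        // so no explicit re-keying is needed; the result is written back to the
        // "account-balances" topic.
        transfers.join(balances, TransfersJoinApp::computeNewBalance)
                 .to("account-balances");

        new KafkaStreams(builder.build(), props).start();
    }

    // Placeholder for the business logic which computes the updated balance.
    private static String computeNewBalance(String transfer, String currentBalance) {
        return currentBalance; // illustrative only
    }
}
```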

Diagram

Diagram of the example/PoC Kafka Streams application

Running the code

To build and run the PoC application, in addition to Maven and Java, we also need a Kafka broker.
I decided to install the Confluent Platform which includes a Kafka broker (or a cluster depending on the configuration chosen) with some example topics and pre-configured integration with ElasticSearch and Kibana. But more importantly, it also includes an admin Web UI called Control Center which comes in very handy.

I hit a few bumps when running the Confluent Platform for the first time on my Fedora 30 computer.
Namely, I had to manually install a couple of software packages (i.e., "jot" and "jq").
And I had to install the Confluent CLI separately.
I also had to make several changes to some properties files and bash scripts in order to run the Confluent Platform as a non-root user. Here are the changes; please modify the configuration values as per your environment.

Watch the following YouTube video to get all the details including starting the Confluent Platform and running the example Streams application:

Source: https://dev.to//victorgil/using-apache-kafka-to-implement-event-driven-microservices-af2
