Kafka Bridge Functionality
The Kafka Bridge lets us integrate both internal and external HTTP clients with a Kafka cluster.
- Internal clients are container-based HTTP clients running in the same Kubernetes cluster as the Kafka Bridge itself. They can reach the bridge on the host and port defined in the KafkaBridge custom resource.
- External clients are HTTP clients running outside the Kubernetes cluster in which the Kafka Bridge is deployed. They can reach the bridge through an OpenShift Route, a LoadBalancer service, or a Kubernetes Ingress.
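For example, an internal client could call the bridge through its in-cluster Service DNS name. A minimal sketch, assuming the bridge is named my-bridge in the kafka namespace and listens on the default HTTP port 8080 (these names are assumptions, not taken from this deployment):

```shell
# Hypothetical in-cluster health check; Service name, namespace, and port
# are assumptions for illustration.
curl http://my-bridge-bridge-service.kafka.svc:8080/healthy
```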
Configuring the Kafka Bridge
The steps in this article build on the environment set up in the earlier post 《OpenShift 4 之部署Strimzi Operator运行Kafka应用》 (deploying the Strimzi Operator to run Kafka on OpenShift 4).
- In the OpenShift Console, use the Strimzi Operator to create a Kafka Bridge with the default configuration, then inspect the KafkaBridge object.
$ oc get KafkaBridge
NAME        DESIRED REPLICAS
my-bridge   1
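The resource the operator creates corresponds roughly to a manifest like the following. This is a minimal sketch, assuming the Kafka cluster is named my-cluster in the same namespace; the apiVersion and field values are assumptions that may differ by Strimzi version, not the operator's exact output:

```shell
# Minimal KafkaBridge sketch; my-cluster-kafka-bootstrap:9092 assumes a
# Strimzi cluster named my-cluster in the same namespace.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080
EOF
```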
- Create a Route from the Kafka Bridge Service. Besides an OpenShift Route, the bridge's Service can also be exposed externally through a LoadBalancer service or an Ingress.
$ oc expose svc my-bridge-bridge-service
route.route.openshift.io/my-bridge-bridge-service exposed
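As an alternative to a Route, the Service type could be switched to LoadBalancer; a sketch, assuming the cluster can provision external load balancers:

```shell
# Hypothetical alternative: expose the bridge Service through a cloud
# LoadBalancer instead of an OpenShift Route.
oc patch service my-bridge-bridge-service -p '{"spec":{"type":"LoadBalancer"}}'
oc get service my-bridge-bridge-service   # EXTERNAL-IP appears once provisioned
```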
Testing the Kafka Bridge
Producing messages
- Set the KAFKA_BRIDGE and KAFKA_TOPIC environment variables.
$ KAFKA_BRIDGE=$(oc get route my-bridge-bridge-service -o template --template '{{.spec.host}}')
$ KAFKA_TOPIC=my-topic
- Run the following command to confirm that the Kafka Bridge is healthy (it should return "HTTP/1.1 200 OK").
$ curl -v http://$KAFKA_BRIDGE/healthy
* About to connect() to my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com port 80 (#0)
*   Trying 52.204.55.24...
* Connected to my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com (52.204.55.24) port 80 (#0)
> GET /healthy HTTP/1.1
> User-Agent: curl/7.29.0
> Host: my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com
> Accept: */*
>
< HTTP/1.1 200 OK
< content-length: 0
< Set-Cookie: 93b1d08256cbf837e3463c0bba903028=0da6748548970e857b22c45232a307b1; path=/; HttpOnly
< Cache-control: private
<
* Connection #0 to host my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com left intact
- Use curl to send test messages to the Kafka topic.
$ curl -X POST \
http://$KAFKA_BRIDGE/topics/$KAFKA_TOPIC \
-H 'content-type: application/vnd.kafka.json.v2+json' \
-d '{
"records": [
{
"key": "key-1",
"value": "value-1"
},
{
"key": "key-2",
"value": "value-2"
}
]
}'
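The request body must be valid JSON in the bridge's records format. A quick offline sanity check (no cluster needed) is to pipe the payload through Python's json.tool before POSTing it; the tool exits non-zero on malformed JSON:

```shell
# Validate the payload locally; python3 -m json.tool pretty-prints valid
# JSON and fails on syntax errors.
BODY='{"records":[{"key":"key-1","value":"value-1"},{"key":"key-2","value":"value-2"}]}'
printf '%s' "$BODY" | python3 -m json.tool
```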
- Run a Kafka consumer to verify that the test messages ("value-1" and "value-2") are received.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-consumer -ti \
--image=strimzi/kafka:0.15.0-kafka-2.3.1 \
--rm=true --restart=Never \
-- bin/kafka-console-consumer.sh \
--bootstrap-server $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 \
--topic $KAFKA_TOPIC --from-beginning
"value-1"
"value-2"
Note: if the command fails with 'Error from server (AlreadyExists): pods "kafka-consumer" already exists', delete the existing kafka-consumer Pod first, then run the command again.
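The leftover Pod can be removed like this (the kafka namespace here matches the KAFKA_CLUSTER_NS default above):

```shell
# Delete the stale consumer Pod so the oc run command can be repeated.
oc -n kafka delete pod kafka-consumer
```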
Consuming messages
To receive messages from a Kafka topic over HTTP through the Kafka Bridge, we first need to subscribe via a consumer group.
- Set the KAFKA_BRIDGE and KAFKA_CONSUMER_GROUP environment variables.
$ KAFKA_BRIDGE=$(oc get route my-bridge-bridge-service -o template --template '{{.spec.host}}')
$ KAFKA_CONSUMER_GROUP=my-group
- Create a consumer named my-consumer and add it to the consumer group my-group.
$ curl -X POST http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"name": "my-consumer",
"format": "json",
"auto.offset.reset": "earliest",
"enable.auto.commit": false
}'
On success, the command returns:
{
"instance_id":"my-consumer",
"base_uri":"http://my-bridge-bridge-service-kafka.apps.cluster-anhui-582d.anhui-582d.example.opentlc.com:80/consumers/my-group/instances/my-consumer"
}
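All subsequent per-consumer requests go to the returned base_uri. One way to capture it is with Python's standard json module; the RESPONSE value below is a canned stand-in for the actual curl output, since the real hostname is environment-specific:

```shell
# Extract base_uri from a consumer-creation response. RESPONSE is a
# hypothetical sample, not output captured from this cluster.
RESPONSE='{"instance_id":"my-consumer","base_uri":"http://bridge.example/consumers/my-group/instances/my-consumer"}'
BASE_URI=$(printf '%s' "$RESPONSE" | python3 -c 'import sys,json; print(json.load(sys.stdin)["base_uri"])')
echo "$BASE_URI"
```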
- Subscribe the my-consumer consumer to the Kafka topic named my-topic. If the command returns without an error, the subscription succeeded.
$ curl -X POST http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer/subscription \
-H 'content-type: application/vnd.kafka.v2+json' \
-d '{
"topics": [
"my-topic"
]
}'
- Using any of the methods above, send the test values 1, 2, 3, 4 to my-topic in order.
- Run the following command to fetch the test messages from my-topic through my-consumer.
$ curl -X GET http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer/records \
-H 'accept: application/vnd.kafka.json.v2+json'
The result (reformatted for readability). Note that the values come back out of send order: Kafka only guarantees ordering within a single partition, and the four records landed on different partitions:
[
{"topic":"my-topic","key":null,"value":3,"partition":8,"offset":0},
{"topic":"my-topic","key":null,"value":4,"partition":9,"offset":0},
{"topic":"my-topic","key":null,"value":1,"partition":2,"offset":0},
{"topic":"my-topic","key":null,"value":2,"partition":3,"offset":0}
]
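Because the consumer was created with enable.auto.commit set to false, offsets should be committed explicitly, and the consumer instance can be deleted once it is no longer needed. A sketch using the bridge's consumer API (an empty-body POST to the offsets endpoint commits the offsets of the records returned so far):

```shell
# Commit the offsets of the records fetched by the last poll.
curl -X POST http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer/offsets \
  -H 'content-type: application/vnd.kafka.v2+json'

# Delete the consumer instance from the group when done.
curl -X DELETE http://$KAFKA_BRIDGE/consumers/$KAFKA_CONSUMER_GROUP/instances/my-consumer \
  -H 'content-type: application/vnd.kafka.v2+json'
```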