OpenShift 4 - AMQ Streams (1): Multiple Consumers Receiving Data from Partitions


OpenShift 4.x HOL Tutorial Index

What is AMQ Streams?

Red Hat AMQ Streams is a Red Hat software subscription based on the community edition of Kafka. It provides all of Kafka's functionality while integrating more smoothly with other Red Hat software. On OpenShift, we use the AMQ Streams Operator to build and maintain a containerized AMQ Streams runtime environment.

Installing the AMQ Streams Environment

Install the AMQ Streams Operator

  1. Create the kafka project.
$ oc new-project kafka
  2. Install the "Red Hat Integration - AMQ Streams" Operator into the kafka project with the default configuration. After it succeeds, "Red Hat Integration - AMQ Streams" appears under Installed Operators, and you can run the following commands to confirm the pod and api-resources status.
$ oc get pod -n kafka
NAME                                                   READY   STATUS    RESTARTS   AGE
amq-streams-cluster-operator-v1.4.0-59c7778c88-7bvzx   1/1     Running   0          22s
 
$ oc api-resources --api-group='kafka.strimzi.io'
NAME                 SHORTNAMES   APIGROUP           NAMESPACED   KIND
kafkabridges         kb           kafka.strimzi.io   true         KafkaBridge
kafkaconnectors      kctr         kafka.strimzi.io   true         KafkaConnector
kafkaconnects        kc           kafka.strimzi.io   true         KafkaConnect
kafkaconnects2is     kcs2i        kafka.strimzi.io   true         KafkaConnectS2I
kafkamirrormaker2s   kmm2         kafka.strimzi.io   true         KafkaMirrorMaker2
kafkamirrormakers    kmm          kafka.strimzi.io   true         KafkaMirrorMaker
kafkas               k            kafka.strimzi.io   true         Kafka
kafkatopics          kt           kafka.strimzi.io   true         KafkaTopic
kafkausers           ku           kafka.strimzi.io   true         KafkaUser

Create the Kafka Cluster

  1. Create a kafka.yaml file with the following content. It defines a Kafka cluster with 3 replicas and a ZooKeeper cluster, also with 3 replicas.
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
  labels:
    app: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
    config:
      auto.create.topics.enable: false
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 700m
    readinessProbe:
      initialDelaySeconds: 60
      timeoutSeconds: 5
    livenessProbe:
      initialDelaySeconds: 60
      timeoutSeconds: 5
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 200m
      limits:
        memory: 2Gi
        cpu: 500m
  entityOperator:
    topicOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: 200m
        limits:
          memory: 2Gi
          cpu: 500m
    userOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: 200m
        limits:
          memory: 2Gi
          cpu: 500m
  2. Run the following commands to create a Kafka cluster named my-cluster, then check the resource status.
$ oc apply -f kafka.yaml -n kafka
kafka.kafka.strimzi.io/my-cluster created
 
$ oc get kafka  -n kafka
NAME         DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
my-cluster   3                        3
 
$ oc get pod -n kafka
NAME                                                   READY   STATUS    RESTARTS   AGE
amq-streams-cluster-operator-v1.4.0-59c7778c88-7bvzx   1/1     Running   0          23m
my-cluster-entity-operator-c4cfc5695-zm5m7             3/3     Running   0          2s
my-cluster-kafka-0                                     1/2     Running   0          61s
my-cluster-kafka-1                                     1/2     Running   0          61s
my-cluster-kafka-2                                     1/2     Running   1          61s
my-cluster-zookeeper-0                                 2/2     Running   0          94s
my-cluster-zookeeper-1                                 2/2     Running   0          94s
my-cluster-zookeeper-2                                 2/2     Running   0          94s
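
Optionally, you can wait until the Cluster Operator marks the Kafka resource Ready before proceeding. This is a hedged sketch: the Ready status condition is reported by recent AMQ Streams/Strimzi releases, so the exact behavior may vary with your version.

$ oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka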

Running a Topic Application: Hello World

Create the Topic

  1. Create a my-topic.yaml file with the following content. It defines a KafkaTopic named my-topic with 3 partitions and 2 replicas; messages are retained for 7,200,000 ms (2 hours).
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 2
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
  2. Run the following commands to create the topic, then check the resource status.
$ oc apply -f my-topic.yaml -n kafka
kafkatopic.kafka.strimzi.io/my-topic created
 
$ oc get kafkatopic -n kafka
NAME       PARTITIONS   REPLICATION FACTOR
my-topic   3            2
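
You can also double-check the topic from inside a broker Pod using Kafka's own CLI. A minimal sketch, assuming the standard Kafka tools are available under bin/ in the broker container:

$ oc exec my-cluster-kafka-0 -c kafka -n kafka -- \
    bin/kafka-topics.sh --bootstrap-server localhost:9092 \
    --describe --topic my-topic

The output should list 3 partitions, each with a replication factor of 2.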

Send and Receive Messages

  1. Create a kafka-producer.yaml file with the following content. It runs the Strimzi hello-world-producer image, which sends MESSAGE_COUNT messages to the topic, one every DELAY_MS milliseconds. (On newer clusters you may need to change apiVersion to apps/v1 and add a selector, since extensions/v1beta1 Deployments have been removed from Kubernetes.)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka-producer
  name: kafka-producer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka-producer
    spec:
      containers:
      - name: kafka-producer
        image: strimzi/hello-world-producer:latest
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 1Gi
        env:
          - name: BOOTSTRAP_SERVERS
            value: my-cluster-kafka-bootstrap:9092
          - name: TOPIC
            value: my-topic
          - name: DELAY_MS
            value: "1000"
          - name: LOG_LEVEL
            value: "INFO"
          - name: MESSAGE_COUNT
            value: "1000"
  2. Run the following commands to deploy the kafka-producer application that sends messages, then check that it is running; an optional console-producer smoke test follows the Pod listing below.
$ oc apply -f kafka-producer.yaml -n kafka
deployment.extensions/kafka-producer created
 
$ oc get pod -l app=kafka-producer -n kafka
NAME                              READY   STATUS    RESTARTS   AGE
kafka-producer-84779c5f86-9kdk4   1/1     Running   0          27s
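
As an optional smoke test that does not depend on the Deployment, you can also run a throwaway console producer against the same bootstrap address. This sketch follows the upstream Strimzi examples; the image tag is an assumption and may need adjusting for your environment:

$ oc run kafka-test-producer -ti --rm=true --restart=Never -n kafka \
    --image=strimzi/kafka:latest -- bin/kafka-console-producer.sh \
    --broker-list my-cluster-kafka-bootstrap:9092 --topic my-topic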
  3. In the OpenShift console, go to Workloads -> Pods, locate kafka-producer-84779c5f86-9kdk4, and view its Logs.
    (screenshot: kafka-producer Pod log output)
  4. Create a kafka-consumer.yaml file with the following content. Note the GROUP_ID environment variable: consumers that share a group id divide the topic's partitions among themselves.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kafka-consumer
  name: kafka-consumer
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
      - name: kafka-consumer
        image: strimzi/hello-world-consumer:latest
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 1Gi
        env:
          - name: BOOTSTRAP_SERVERS
            value: my-cluster-kafka-bootstrap:9092
          - name: TOPIC
            value: my-topic
          - name: GROUP_ID
            value: my-hello-world-consumer
          - name: LOG_LEVEL
            value: "INFO"
          - name: MESSAGE_COUNT
            value: "1000"
  5. Run the following commands to deploy the kafka-consumer application that receives messages, then check that it is running.
$ oc apply -f kafka-consumer.yaml -n kafka
deployment.extensions/kafka-consumer created
 
$ oc get pod -l app=kafka-consumer -n kafka
NAME                              READY   STATUS    RESTARTS   AGE
kafka-consumer-84479f749c-2zbt2   1/1     Running   0          5m55s
  6. View the kafka-consumer application's log output. It confirms that this single kafka-consumer receives data from all three partitions, numbered 0, 1, and 2. A consumer-group inspection sketch follows the screenshot.
    (screenshot: kafka-consumer Pod log output showing partitions 0, 1, and 2)
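
To see the partition assignment without reading logs, you can describe the consumer group from inside a broker Pod. A minimal sketch, again assuming the standard Kafka tools in the broker container:

$ oc exec my-cluster-kafka-0 -c kafka -n kafka -- \
    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-hello-world-consumer

With a single group member, all three partitions of my-topic should be assigned to the same consumer; after the scale-up in the next step, re-running the command should show them split across two members.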
  7. Run the following commands to scale the kafka-consumer Deployment up to 2 Pods.
$ oc scale deployment kafka-consumer --replicas=2 -n kafka
 
$ oc get pod -l app=kafka-consumer -n kafka
NAME                              READY   STATUS    RESTARTS   AGE
kafka-consumer-84479f749c-2zbt2   1/1     Running   3          149m
kafka-consumer-84479f749c-ctd9q   1/1     Running   0          1m9s
  8. In the OpenShift console, check the logs of the two kafka-consumer Pods. The partition numbers show that one Pod now receives data from only 1 partition, while the other receives data from the remaining 2 partitions.
    (screenshot: first kafka-consumer Pod log output)
    (screenshot: second kafka-consumer Pod log output)
  9. Run the following command to scale the kafka-consumer Deployment back down to 1 Pod, then check its logs again to confirm that it once more receives data from all 3 partitions.
$ oc scale deployment kafka-consumer --replicas=1 -n kafka
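
When you are finished, you can optionally clean up the test resources (a sketch; adjust it to match what you actually created):

$ oc delete deployment kafka-producer kafka-consumer -n kafka
$ oc delete kafkatopic my-topic -n kafka
$ oc delete kafka my-cluster -n kafka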