OpenShift 4 - Implementing CDC for a MySQL Database with Debezium and Kafka


OpenShift 4.x HOL Tutorial Index

Scenario

Debezium runs as a connector on Kafka Connect, captures row-level changes from a MySQL database, and publishes them to a Kafka topic, from which a consumer application receives the change events. (Scenario diagram omitted.)

Deployment Environment

Run the following command to create a project:

$ oc new-project debezium-cdc

Install the MySQL Environment

  1. In Terminal 1, run the following command to deploy MySQL:
$ oc new-app docker.io/debezium/example-mysql:latest -e MYSQL_ROOT_PASSWORD=debezium -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw -n debezium-cdc
  2. Get the name of the Pod running MySQL:
$ MYSQL_POD=$(oc get pod -l deployment=example-mysql -o jsonpath={.items[0].metadata.name} -n debezium-cdc)
  3. Log in to MySQL as mysqluser/mysqlpw and view the data in the customers table:
$ oc exec $MYSQL_POD -it -- mysql -u mysqluser -pmysqlpw inventory
  
mysql> select * from customers;
+------+------------+-----------+-----------------------+
| id   | first_name | last_name | email                 |
+------+------------+-----------+-----------------------+
| 1001 | Sally      | Thomas    | sally.thomas@acme.com |
| 1002 | George     | Bailey    | gbailey@foobar.com    |
| 1003 | Edward     | Walker    | ed@walker.com         |
| 1004 | Anne       | Kretchmar | annek@noanswer.org    |
+------+------------+-----------+-----------------------+
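
Debezium's MySQL connector reads the server's binlog, so binary logging must be enabled and set to row format. The debezium/example-mysql image ships preconfigured for this; if you want to double-check from the same session, these standard queries (a quick sanity check, not part of the original steps) should report ON and ROW respectively:

mysql> show variables like 'log_bin';
mysql> show variables like 'binlog_format';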

Install the Kafka Operator

In the OperatorHub of the OpenShift console's Administrator view, find the Operator named Strimzi or AMQ Streams (Strimzi is the community Kafka distribution, AMQ Streams is Red Hat's productized Kafka), then install it with the default configuration.
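
If you prefer the CLI to the console, an Operator can also be installed by creating a Subscription object. Below is a minimal sketch for the community Strimzi operator; the channel and catalog source are assumptions, so verify them first with oc get packagemanifest strimzi-kafka-operator -n openshift-marketplace.

$ cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable                      # assumed channel; check the packagemanifest
  name: strimzi-kafka-operator
  source: community-operators          # assumed catalog source
  sourceNamespace: openshift-marketplace
EOF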

Create the Kafka Cluster

  1. After the Operator is installed, switch to the Developer view and open the "+Add" menu. Click the "Add to Project" icon, search for "kafka" in the search bar, and then click the "Create" button.

  2. On the "Create Kafka" page, accept the default configuration and click the "Create" button. Once the Kafka instance has finished deploying, it appears in the Topology view.
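
The console form simply creates a Kafka custom resource. For reference, a minimal sketch of a roughly equivalent resource is shown below (three brokers, three ZooKeeper nodes, ephemeral storage; field names follow the Strimzi v1beta2 API, but adjust replicas, storage, and listeners to your needs):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: debezium-cdc
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}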

Create Kafka Connect

  1. On the "Topology" page, click the "Add to Project" icon, find and select "Kafka Connect", then click "Create". On the "Create KafkaConnect" page, create the KafkaConnect with the following configuration:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  namespace: debezium-cdc
  annotations:
    strimzi.io/use-connector-resources: 'true'
spec:
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
  build:
    output:
      type: docker
      image: image-registry.openshift-image-registry.svc:5000/debezium-cdc/debezium-connect-mysql:latest
    plugins: 
      - name: debezium-connector-mysql
        artifacts:
          - type: zip
            url: https://maven.repository.redhat.com/ga/io/debezium/debezium-connector-mysql/1.9.5.Final-redhat-00001/debezium-connector-mysql-1.9.5.Final-redhat-00001-plugin.zip
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  version: 3.1.0
  replicas: 1
  bootstrapServers: 'my-cluster-kafka-bootstrap:9093'
  2. In Terminal 2, run the following commands to watch the Build named my-connect-cluster-connect-build-1 until it completes. You can also check the generated ImageStream object and the KafkaConnect object:
$ oc get build -n debezium-cdc
NAME                                 TYPE     FROM         STATUS     STARTED       DURATION
my-connect-cluster-connect-build-1   Docker   Dockerfile   Complete   5 hours ago   51s
$ oc get is debezium-streams-connect -n debezium-cdc
NAME                       IMAGE REPOSITORY                                                                         TAGS     UPDATED
debezium-streams-connect   image-registry.openshift-image-registry.svc:5000/debezium-cdc/debezium-streams-connect   latest   5 hours ago


  3. (Optional) In Terminal 2, run the following commands to create a Route to the Kafka Connect REST API and record its host name:
$ oc expose svc/my-connect-cluster-connect-api -n debezium-cdc
$ CONNECT_API=$(oc get route my-connect-cluster-connect-api -o jsonpath='{ .spec.host }' -n debezium-cdc) 
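
With the route in place, the standard Kafka Connect REST API can be queried directly. Two quick sanity checks (this assumes the route serves plain HTTP; the endpoints are part of the regular Kafka Connect REST API):

$ curl http://$CONNECT_API/connector-plugins   # the Debezium MySQL connector class should be listed
$ curl http://$CONNECT_API/connectors          # empty for now; the connector is created in the next section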

Create the Kafka Connector

  1. On the "Topology" page, click the "Add to Project" icon, find and select "Kafka Connector", then click "Create". On the "Create KafkaConnector" page, create the KafkaConnector with the following configuration. Note that database.server.name is set to mysql, so change events for the customers table will be published to the topic mysql.inventory.customers:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector-mysql
  namespace: debezium-cdc
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    tasks.max: 1
    database.hostname: example-mysql
    database.port: 3306
    database.user: debezium
    database.password: dbz
    database.server.id: 184054
    database.server.name: mysql
    database.include.list: inventory
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9093
    database.history.kafka.topic: schema-changes.inventory
  2. Run the following commands to check the KafkaConnector object and the status of the connector:
$ oc get KafkaConnector -n debezium-cdc
NAME                 CLUSTER              CONNECTOR CLASS                              MAX TASKS   READY
my-connector-mysql   my-connect-cluster   io.debezium.connector.mysql.MySqlConnector   1           True
$ oc describe KafkaConnector my-connector-mysql -n debezium-cdc
...
Status:
  Conditions:
    Last Transition Time:  2022-08-13T10:28:02.625375Z
    Status:                True
    Type:                  Ready
  Connector Status:
    Connector:
      State:      RUNNING
      worker_id:  10.128.0.67:8083
    Name:         my-connector-mysql
    Tasks:
    Type:               source
  Observed Generation:  1
  Tasks Max:            1
  Topics:
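
If you created the optional route to the Connect REST API earlier, the same status information is available over HTTP, and the KafkaTopic objects created for the captured tables can be listed as well (the status endpoint below is the standard Kafka Connect REST API; the connector state should be RUNNING):

$ curl http://$CONNECT_API/connectors/my-connector-mysql/status
$ oc get kafkatopic -n debezium-cdc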

Test

  1. In Terminal 2, deploy an application that receives the captured MySQL change data from Kafka, then confirm the corresponding KafkaTopic exists:
$ oc new-app quay.io/efeluzy/quarkus-kafka-consumer:latest -e mp.messaging.incoming.mytopic-subscriber.topic=mysql.inventory.customers -n debezium-cdc
$ oc expose svc quarkus-kafka-consumer -n debezium-cdc
$ oc get kafkatopic mysql.inventory.customers
NAME                        CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
mysql.inventory.customers   my-cluster   1            3                    True
  2. In Terminal 2, run the following command to access the test application; the terminal then waits for incoming change events:
$ curl http://$(oc get route quarkus-kafka-consumer -o jsonpath='{ .spec.host }' -n debezium-cdc)/stream
  3. In the MySQL session in Terminal 1, run an update statement to change the row with primary key 1003 in the customers table:
mysql> update customers set first_name='Test' where id = 1003;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
  4. In Terminal 2, confirm that the test application receives from Kafka the change data that Debezium captured from MySQL. Note the "before" and "after" fields in the "payload" section of the output, as well as the data described by Debezium's "dbserver1.inventory.customers.Envelope" schema:
data: Kafka Offset=4; message={"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"dbserver1.inventory.customers.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"dbserver1.inventory.customers.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.mysql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":"dbserver1.inventory.customers.Envelope"},"payload":{"before":{"id":1003,"first_name":"Edward","last_name":"Walker","email":"ed@walker.com"},"after":{"id":1003,"first_name":"erfin","last_name":"Walker","email":"ed@walker.com"},"source":{"version":"1.1.2.Final","connector":"mysql","name":"dbserver1","ts_ms":1595073286000,"snapshot":"false","db":"inventory","table":"customers","server_id":223344,"gtid":null,"file":"mysql-bin.000003","pos":364,"row":0,"thread":9,"query":null},"op":"u","ts_ms":1595073286806,"transaction":null}}
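
The interesting part of the event is the payload section with its before and after values. If you would rather inspect the raw topic than the SSE stream, a console consumer run inside one of the Kafka broker pods works as well (a sketch: the pod name follows the usual Strimzi naming, the plain listener on port 9092 is used, and jq is assumed to be installed locally just to extract the payload):

$ oc exec -it my-cluster-kafka-0 -n debezium-cdc -- \
    bin/kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic mysql.inventory.customers \
    --from-beginning | jq '.payload | {op, before, after}'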

References

https://debezium.io/documentation/reference/stable/operations/openshift.html
https://debezium.io/documentation/reference/1.9/connectors/mysql.html#mysql-connector-properties
https://access.redhat.com/documentation/en-us/red_hat_integration/2022.q3/html/getting_started_with_debezium/starting-services
