FlinkCDC 2.1.1 Notes

Flink CDC 2.0 data processing flow: a full walkthrough

https://cloud.tencent.com/developer/article/1899520

MySQL

MySqlSource<String> mySqlSource = MySqlSource.<String>builder()
        .hostname("localhost")
        .port(3306)
        .databaseList("test") // monitor the "test" database
        .tableList("test.nt_eau_switch", "test.test") // fully-qualified "db.table" names
        .username("root")
        .password("123456")
        .deserializer(new MysqlStringDeserializationSchema()) // custom: converts SourceRecord to a JSON String
        .build();

DataStream<String> cdcstream = env.fromSource(mySqlSource, WatermarkStrategy.noWatermarks(), "MySQL business source");

// Commenting out the tableList(...) call, i.e. removing
// .tableList("test.nt_eau_switch, test.test")
// produces the following error:

15:49:32.130 [flink-akka.actor.default-dispatcher-2] ERROR org.apache.flink.runtime.util.FatalExitExceptionHandler - FATAL: Thread 'flink-akka.actor.default-dispatcher-2' produced an uncaught exception. Stopping the process...
java.util.concurrent.CompletionException: org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Could not start RpcEndpoint jobmanager_3.
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.uniRun(CompletableFuture.java:700) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:687) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:720) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.thenRunAsync(CompletableFuture.java:2019) ~[?:1.8.0_152]
	at org.apache.flink.runtime.dispatcher.Dispatcher.removeJob(Dispatcher.java:737) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$runJob$4(Dispatcher.java:425) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:800) ~[?:1.8.0_152]
	at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442) ~[?:1.8.0_152]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:440) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:208) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [scala-library-2.11.12.jar:?]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [scala-library-2.11.12.jar:?]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [scala-library-2.11.12.jar:?]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [scala-library-2.11.12.jar:?]
	at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.actor.ActorCell.invoke(ActorCell.scala:561) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.Mailbox.run(Mailbox.scala:225) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [akka-actor_2.11-2.5.21.jar:2.5.21]
Caused by: org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Could not start RpcEndpoint jobmanager_3.
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:610) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:?]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:?]
	... 12 more
Caused by: org.apache.flink.runtime.jobmaster.JobMasterException: Could not start the JobMaster.
	at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:385) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:?]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:?]
	... 12 more
Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to start the operator coordinators
	at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:90) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:?]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:?]
	... 12 more
Caused by: java.lang.NullPointerException
	at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:59) ~[flink-core-1.13.2.jar:1.13.2]
	at com.ververica.cdc.connectors.mysql.source.config.MySqlSourceConfig.<init>(MySqlSourceConfig.java:91) ~[flink-sql-connector-mysql-cdc-2.1.1.jar:2.1.1]
	at com.ververica.cdc.connectors.mysql.source.config.MySqlSourceConfigFactory.createConfig(MySqlSourceConfigFactory.java:287) ~[flink-sql-connector-mysql-cdc-2.1.1.jar:2.1.1]
	at com.ververica.cdc.connectors.mysql.source.MySqlSource.createEnumerator(MySqlSource.java:153) ~[flink-sql-connector-mysql-cdc-2.1.1.jar:2.1.1]
	at org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:124) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator$DeferrableCoordinator.applyCall(RecreateOnResetOperatorCoordinator.java:291) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.operators.coordination.RecreateOnResetOperatorCoordinator.start(RecreateOnResetOperatorCoordinator.java:70) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:194) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:592) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:180) ~[flink-runtime_2.11-1.13.2.jar:1.13.2]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) ~[scala-library-2.11.12.jar:?]
	at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) ~[akka-actor_2.11-2.5.21.jar:2.5.21]
	at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) ~[scala-library-2.11.12.jar:?]
	... 12 more

// To monitor all tables in the database, call tableList() with no arguments
.tableList()
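The NullPointerException at the bottom of the stack trace comes from Preconditions.checkNotNull inside MySqlSourceConfig, i.e. the 2.1.1 builder requires tableList(...) to be called. A related pitfall: when the tables are passed as one comma-separated string, the table list ultimately becomes Debezium's table.include.list, and a stray space (as in "test.nt_eau_switch, test.test") may stop " test.test" from matching, depending on whether your Debezium version trims entries. The helper below is hypothetical (not part of flink-cdc), a minimal sketch that trims each entry before handing the string to tableList(...):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Hypothetical helper: normalizes a comma-separated table list so that
// every entry is a clean fully-qualified "database.table" name with no
// surrounding whitespace. Assumption: your Debezium version may not trim
// entries in table.include.list, so trimming up front is the safer bet.
public class TableListUtil {
    public static String normalize(String rawList) {
        return Arrays.stream(rawList.split(","))
                .map(String::trim)           // drop spaces around each entry
                .filter(s -> !s.isEmpty())   // skip empty fragments
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(normalize("test.nt_eau_switch, test.test"));
        // prints: test.nt_eau_switch,test.test
    }
}
```

Passing the two names as separate varargs to tableList(...) avoids the issue entirely.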
MongoDB

SourceFunction<String> sourceFunction = MongoDBSource.<String>builder()
        .hosts("hadoop03:30011")
        .username("root")
        .password("123456")
        .database("school")
        .collection("student")
        .deserializer(new MongodbStringDeserializationSchema()) // custom: converts SourceRecord to a JSON String
        .build();

DataStreamSource<String> mongodbCDCstream = env.addSource(sourceFunction);

String mongodb_ddl = "CREATE TABLE mongodb_sources_test ( " +
        " _id STRING, " +
        " name STRING, " +
        " age INT, " +
        " PRIMARY KEY (_id) NOT ENFORCED" +
        ") WITH ( " +
        " 'connector' = 'mongodb-cdc', " +
        " 'hosts' = 'hadoop03:30010', " +
        " 'username' = 'root', " +
        " 'password' = '123456', " +
        " 'database' = 'school', " +
        " 'collection' = 'student'" +
        ")";

tableEnv.executeSql(mongodb_ddl); // execute the MongoDB DDL defined above (not an undefined mysql_cdc variable)

// Kafka sink
String kafka_sink_sql = "CREATE TABLE sink (" +
        " _id STRING, " +
        " name STRING, " +
        " age INT " +
        ") WITH (" +
        "  'connector' = 'kafka'," +
        "  'topic' = 'cdc-test-1'," +
        "  'properties.bootstrap.servers' = 'hadoop01:9092'," +
        "  'format' = 'changelog-json' )";

        tableEnv.executeSql(kafka_sink_sql);

        tableEnv.executeSql("insert into sink select * from mongodb_sources_test");
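Consumers of the cdc-test-1 topic need to interpret the change flag in each changelog-json record. As a rough, dependency-free sketch: the envelope shape {"data": {...}, "op": "+I"} (with op being a RowKind short string such as +I, -U, +U, -D) is an assumption about the changelog-json serializer, so verify it against your actual records. The class below is hypothetical and uses only string scanning rather than a JSON library:

```java
// Hypothetical sketch: pull the "op" change flag out of a changelog-json
// record string. Assumes the record looks like {"data":{...},"op":"+I"};
// a real consumer should use a proper JSON parser instead.
public class ChangelogOp {
    public static String opOf(String record) {
        int key = record.lastIndexOf("\"op\"");      // locate the op field
        if (key < 0) return null;                     // not a changelog record
        int start = record.indexOf('"', record.indexOf(':', key) + 1);
        int end = record.indexOf('"', start + 1);     // closing quote of value
        return record.substring(start + 1, end);
    }

    public static void main(String[] args) {
        String rec = "{\"data\":{\"_id\":\"1\",\"name\":\"tom\",\"age\":18},\"op\":\"+I\"}";
        System.out.println(opOf(rec)); // prints: +I
    }
}
```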
FlinkCDC Notes

Flink SQL CDC goes live! 13 lessons we learned from production practice

https://blog.csdn.net/qq_31975963/article/details/108585043?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_baidulandingword~default-0.pc_relevant_paycolumn_v3&spm=1001.2101.3001.4242.1&utm_relevant_index=3

FlinkCDC Mongodb2Elasticsearch

https://ververica.github.io/flink-cdc-connectors/master/content/%E5%BF%AB%E9%80%9F%E4%B8%8A%E6%89%8B/mongodb-tutorial-zh.html?highlight=mongodb

flink-cdc writing to Hive, with daily incremental merges

https://www.csdn.net/tags/Ntzacg5sNjYzMjctYmxvZwO0O0OO0O0O.html

FlinkCDC blog

https://ververica.github.io/flink-cdc-connectors/master/index.html

FlinkCDC data ordering


Flink CDC 2.0 officially released: a deep dive into the core improvements

https://mp.weixin.qq.com/s?__biz=MzU3Mzg4OTMyNQ==&mid=2247493214&idx=1&sn=5c1add3c2ea15f300d8afca522bd77a0&chksm=fd38681cca4fe10a28540bbb22e11459a0332a0f6b3fc6a471811ecfc50919601ff88871816c&scene=21#wechat_redirect

Issue log

FlinkCDC kept failing during production testing with errors about the missing MySQL RELOAD privilege

https://www.cnblogs.com/30go/p/15808632.html

CREATE USER 'bigdata'@'%' IDENTIFIED BY 'password';
-- IDENTIFIED BY inside GRANT was removed in MySQL 8.0; the password is already set by CREATE USER above
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'bigdata'@'%';
FLUSH PRIVILEGES;