First, prepare a file containing the following data:
a, 1547718199, 1000000
b, 1547718200, 1000000
c, 1547718201, 1000000
d, 1547718202, 1000000
e, 1547718203, 1000000
f, 1547718204, 1000000
g, 1547718205, 1000000
h, 1547718210, 1000000
i, 1547718210, 1000000
j, 1547718210, 1000000
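Each record is a comma-separated triple of name, Unix timestamp, and salary. A minimal sketch of how one such line can be parsed into the SensorReading case class that the code below defines (the parseLine helper is an illustration, not part of the article's code):

```scala
case class SensorReading(name: String, timestamp: Long, salary: Double)

// Parse one "name, timestamp, salary" line; trims the spaces after the commas.
def parseLine(line: String): SensorReading = {
  val fields = line.split(",").map(_.trim)
  SensorReading(fields(0), fields(1).toLong, fields(2).toDouble)
}
```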
Scala code:
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.{RichSinkFunction, SinkFunction}
import org.apache.flink.streaming.api.scala._
case class SensorReading(name: String, timestamp: Long, salary: Double)
object a1 {
def main(args: Array[String]): Unit = {
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setParallelism(1)
// Data source: read the prepared file line by line
val dataStream: DataStream[String] = env.readTextFile("D:\\wlf.备份24.1.3\\wlf\\ideaProgram\\bbbbbb\\src\\main\\resources\\salary.t")
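The excerpt breaks off here. Based on the article's own summary, the remaining steps map each line to a SensorReading and write it to MySQL through a custom JDBC sink. A self-contained reconstruction sketch follows; the JDBC URL, credentials, table name `salary`, and the placeholder file path are assumptions, not the article's values:

```scala
// Reconstruction sketch of the full job; connection details and the table
// name "salary" are assumptions for illustration only.
import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.{RichSinkFunction, SinkFunction}
import org.apache.flink.streaming.api.scala._

object JdbcSinkSketch {
  case class SensorReading(name: String, timestamp: Long, salary: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)

    // Placeholder path; substitute the real location of the prepared file.
    val lines: DataStream[String] = env.readTextFile("path/to/salary/file")

    // Parse "name, timestamp, salary" lines into case class instances.
    val readings: DataStream[SensorReading] = lines.map { line =>
      val fields = line.split(",").map(_.trim)
      SensorReading(fields(0), fields(1).toLong, fields(2).toDouble)
    }

    readings.addSink(new MyJdbcSink())
    env.execute("jdbc sink job")
  }

  // Opens one connection per subtask; updates an existing row or inserts a new one.
  class MyJdbcSink extends RichSinkFunction[SensorReading] {
    var conn: Connection = _
    var insertStmt: PreparedStatement = _
    var updateStmt: PreparedStatement = _

    override def open(parameters: Configuration): Unit = {
      conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "root", "123456")
      insertStmt = conn.prepareStatement("INSERT INTO salary (name, salary) VALUES (?, ?)")
      updateStmt = conn.prepareStatement("UPDATE salary SET salary = ? WHERE name = ?")
    }

    override def invoke(value: SensorReading, context: SinkFunction.Context): Unit = {
      // Try an update first; if no row was touched, insert instead.
      updateStmt.setDouble(1, value.salary)
      updateStmt.setString(2, value.name)
      updateStmt.executeUpdate()
      if (updateStmt.getUpdateCount == 0) {
        insertStmt.setString(1, value.name)
        insertStmt.setDouble(2, value.salary)
        insertStmt.execute()
      }
    }

    override def close(): Unit = {
      insertStmt.close()
      updateStmt.close()
      conn.close()
    }
  }
}
```

The update-then-insert pattern in `invoke` is what gives the sink its insert-or-update behavior without requiring a unique key with `ON DUPLICATE KEY UPDATE` on the MySQL side.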

This article presents a Flink streaming job written in Scala that reads data from a CSV file, converts each line to a SensorReading object, and uses a JDBC sink to insert or update the rows in a MySQL database in real time.