![](https://img-blog.csdnimg.cn/img_convert/a56eb36105937838da90da844546abe9.png)
1. Environment
1.1 getExecutionEnvironment
Creates an execution environment that represents the context of the current program. If the program is invoked standalone, this method returns a local execution environment; if the program is invoked from a command-line client to submit it to a cluster, it returns that cluster's execution environment. In other words, getExecutionEnvironment decides which kind of environment to return based on how the program is run, and it is the most common way to create an execution environment.
// Batch execution environment
val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
// Streaming execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
If no parallelism is set, the value configured in flink-conf.yaml is used; the default is 1.
parallelism.default: 1
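The default from flink-conf.yaml can also be overridden in code on the environment itself; a minimal sketch (the value 4 is only an example):
val env = StreamExecutionEnvironment.getExecutionEnvironment
// Override the configured default parallelism for this job
env.setParallelism(4)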
1.2 createLocalEnvironment
Returns a local execution environment; the default parallelism has to be specified when it is called, as in the sketch below.
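A minimal sketch (the parallelism value 1 is only illustrative):
val env = StreamExecutionEnvironment.createLocalEnvironment(1)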
1.3 createRemoteEnvironment
Returns a cluster execution environment and submits the Jar to a remote server. The JobManager's hostname/IP and port have to be specified when calling it, along with the Jar package to run on the cluster.
val env = ExecutionEnvironment.createRemoteEnvironment("jobmanage-hostname", 6123, "YOURPATH//wordcount.jar")
2. Source
2.1 Reading data from a collection
// Case class for a sensor reading: sensor ID, timestamp, temperature
case class SensorReading(id: String, timestamp: Long, temperature: Double)
// Create the execution environment
val env = StreamExecutionEnvironment.getExecutionEnvironment
// 1. Read data from a collection
val dataList: List[SensorReading] = List(
  SensorReading("sensor_1", 1547718199, 35.8),
  SensorReading("sensor_6", 1547718201, 15.4),
  SensorReading("sensor_7", 1547718202, 6.7),
  SensorReading("sensor_10", 1547718205, 38.1)
)
val stream1 = env.fromCollection(dataList)
stream1.print("stream1:") //.setParallelism(1)
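For quick tests the elements can also be passed directly as varargs via fromElements; a minimal sketch equivalent to the collection version above (stream1b is just an illustrative name):
// Alternative: pass the elements directly instead of building a List first
val stream1b = env.fromElements(
  SensorReading("sensor_1", 1547718199, 35.8),
  SensorReading("sensor_6", 1547718201, 15.4)
)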
2.2 Reading data from a text file
// 2. Read from a file
val filePath = "/Users/FengZhen/Desktop/accumulate/0_project/flink_learn/src/main/resources/data/sensor.txt"
val stream2 = env.readTextFile(filePath)
stream2.print("stream2:")
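readTextFile produces a DataStream[String] with one element per line. A minimal sketch of mapping each line to a SensorReading, assuming sensor.txt is comma-separated as id,timestamp,temperature:
// Parse each line into the SensorReading case class (the file format is an assumption)
val sensorStream = stream2.map(line => {
  val fields = line.split(",").map(_.trim)
  SensorReading(fields(0), fields(1).toLong, fields(2).toDouble)
})
sensorStream.print("parsed:")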
2.3 Using a Kafka message queue as the source
// 3. Read data from Kafka
// ./kafka-console-producer.sh --broker-list localhost:9092 --topic topic_sensor
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("group.id", "consumer-group")
properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
properties.setProperty("auto.offset.reset", "latest")
val streamKafka = env.addSource(new FlinkKafkaConsumer[String](
  "topic_sensor",
  new SimpleStringSchema(),
  properties
))
streamKafka.print("streamKafka:")
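FlinkKafkaConsumer is not part of Flink core; it comes from the Kafka connector, which has to be added as a project dependency. A sketch for build.sbt, assuming sbt is used; the artifact name and version are assumptions and must match your Flink and Scala versions:
// build.sbt (illustrative; align the version with your Flink distribution)
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka" % "1.13.6"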
2.4 Custom source
Besides the sources above, we can also define a custom source. All that is required is to pass in a SourceFunction, which is called like this:
// 4. Custom source
val streamSelfSource = env.addSource(new MySensorSource())
streamSelfSource.print("streamSelfSource:")
env.execute("source test")
class MySensorSource extends SourceFunction[SensorReading]{

  // Flag indicating whether the source is still running
  var running: Boolean = true

  override def run(ctx: SourceFunction.SourceContext[SensorReading]): Unit = {
    // Initialize a random number generator
    val rand = new Random()
    // Initial temperature readings for 10 sensors
    var curTemp = 1.to(10).map(
      i => ("sensor_" + i, rand.nextDouble() * 100)
    )
    // Keep producing data in an infinite loop until the source is cancelled
    while (running){
      // Update the temperature values with a small random drift
      curTemp = curTemp.map(
        t => (t._1, t._2 + rand.nextGaussian())
      )
      // Get the current timestamp
      val curTime = System.currentTimeMillis()
      // Emit the readings via ctx.collect
      curTemp.foreach(
        t => ctx.collect(SensorReading(t._1, curTime, t._2))
      )
      Thread.sleep(100)
    }
  }

  override def cancel(): Unit = {
    running = false
  }
}
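Note that a source implemented with a plain SourceFunction always runs with parallelism 1. If the custom source needs to run in parallel, implement ParallelSourceFunction[SensorReading] (or extend RichParallelSourceFunction) instead, keeping the same run/cancel structure.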