I had previously planned to use Spark Streaming to consume logs from Alibaba Cloud Log Service (SLS); the architecture is described in an earlier post. Roughly: Flume collects the data from SLS, sinks it into Kafka, and Spark Streaming consumes it from Kafka.
At first glance that chain is rather long. Spark Streaming can in fact consume SLS data directly, but my impression is that ever since Alibaba acquired the company behind Flink, its embrace of Spark has cooled. I have looked at Alibaba Cloud's EMR product before, and even the DingTalk group for EMR sees very little activity. The Spark Streaming integration for SLS lives inside EMR, so after a quick look around GitHub I gave up on that route.
After weighing things up: the company already uses Alibaba Cloud's Realtime Compute product, and most of our real-time requirements run on it, all implemented by me alone. The good part of that product is that pure SQL solves a lot of requirements; as long as you know a little Flink and how to write SQL, simple jobs can be built there. The bad part is that complex requirements are a struggle with SQL alone. Thirty-day retention, for example, can be expressed in SQL, but it took enormous effort. Cost is another issue, and it is high (check the pricing yourself): early on we used the dedicated (exclusive) mode, fell back to the shared mode when the project underperformed, and later switched back to dedicated again for a new project on a tight schedule. We hit quite a few pitfalls along the way, which I may share in a later post.
For the reasons above, I started investigating whether open-source Flink can consume Alibaba Cloud Log Service (SLS) data directly. First, the official Alibaba Cloud documentation:
https://help.aliyun.com/document_detail/63594.html?spm=a2c4g.11186623.6.934.362c5af1kYw9J4
I copied the sample code from that page into IDEA; it looks like this:
package com.flink;

import com.aliyun.openservices.log.flink.ConfigConstants;
import com.aliyun.openservices.log.flink.FlinkLogConsumer;
import com.aliyun.openservices.log.flink.data.RawLogGroupList;
import com.aliyun.openservices.log.flink.data.RawLogGroupListDeserializer;
import com.aliyun.openservices.log.flink.util.Consts;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Properties;

public class SlsFlinkConsumerApp {

    public static void main(String[] args) throws Exception {
        Properties configProps = new Properties();
        // Endpoint of the Log Service region
        configProps.put(ConfigConstants.LOG_ENDPOINT, "cn-shenzhen.log.aliyuncs.com");
        // Access key credentials
        configProps.put(ConfigConstants.LOG_ACCESSSKEYID, "<Your Access Key Id>");
        configProps.put(ConfigConstants.LOG_ACCESSKEY, "<Your Access Key>");
        // Log Service project
        configProps.put(ConfigConstants.LOG_PROJECT, "<Your Loghub project>");
        // Log Service Logstore
        configProps.put(ConfigConstants.LOG_LOGSTORE, "<Your Loghub logstore>");
        // Start consuming from the latest cursor
        configProps.put(ConfigConstants.LOG_CONSUMER_BEGIN_POSITION, Consts.LOG_END_CURSOR);
        configProps.put(ConfigConstants.LOG_FETCH_DATA_INTERVAL_MILLIS, "100");
        configProps.put(ConfigConstants.LOG_MAX_NUMBER_PER_FETCH, "100");
        configProps.put(ConfigConstants.LOG_SHARDS_DISCOVERY_INTERVAL_MILLIS, "30000");

        // Deserializer for Log Service messages
        RawLogGroupListDeserializer deserializer = new RawLogGroupListDeserializer();

        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Enable Flink exactly-once semantics
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        // Take a checkpoint every 5 seconds
        env.enableCheckpointing(5000);

        DataStream<RawLogGroupList> logTestStream = env.addSource(
                new FlinkLogConsumer<RawLogGroupList>(deserializer, configProps)
        );
        logTestStream.print();

        env.execute("SlsFlinkConsumerApp");
    }
}
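The print() sink above dumps whole RawLogGroupList objects, which is hard to read. In practice you would usually flatten each batch into individual log records first. Below is a minimal sketch of such a flatMap step; the getter names (getRawLogGroups(), getLogs(), getContents()) are my assumptions based on the connector's data classes, so check the flink-log-connector source for the exact API of the version you use:

// Additional imports needed for this sketch:
// import org.apache.flink.api.common.functions.FlatMapFunction;
// import org.apache.flink.util.Collector;
// import com.aliyun.openservices.log.flink.data.RawLogGroup;
// import com.aliyun.openservices.log.flink.data.RawLog;
// import java.util.Map;

// Flatten each RawLogGroupList into one string per log entry instead of printing whole batches.
DataStream<String> flatLogs = logTestStream.flatMap(new FlatMapFunction<RawLogGroupList, String>() {
    @Override
    public void flatMap(RawLogGroupList logGroupList, Collector<String> out) {
        // Getter names here are assumptions; verify against your connector version.
        for (RawLogGroup logGroup : logGroupList.getRawLogGroups()) {
            for (RawLog rawLog : logGroup.getLogs()) {
                // Each log entry carries its fields as key/value pairs
                Map<String, String> contents = rawLog.getContents();
                out.collect(contents.toString());
            }
        }
    }
});
flatLogs.print();

With this in place you get one printed line per log record rather than one per fetched batch, which is much easier to eyeball while testing.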
My pom file:
<dependencies>
    <!-- Apache Flink dependencies -->
    <!-- These dependencies are provided, because they should not be packaged into the JAR file. -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <!-- Scala Library, provided by Flink as well. -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <!-- aliyun -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.11</artifactId>
        <version>1.3.2</version>
    </dependency>
    <dependency>
        <groupId>com.aliyun.openservices</groupId>
        <artifactId>flink-log-connector</artifactId>
        <version>0.1.7</version>
    </dependency>
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>com.aliyun.openservices</groupId>
        <artifactId>aliyun-log</artifactId>
        <version>0.6.19</version>
    </dependency>
    <dependency>
        <groupId>com.aliyun.openservices</groupId>
        <artifactId>log-loghub-producer</artifactId>
        <version>0.1.8</version>
    </dependency>
    <!-- Add connector dependencies here. They must be in the default scope (compile). -->
    <!-- Example:
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
    -->
    <!-- Add logging framework, to produce console output when running in the IDE. -->
    <!-- These dependencies are excluded from the application JAR by default. -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.7</version>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>
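Note that the pom references ${flink.version}, ${scala.binary.version} and ${scala.version} but does not show where they are defined; they belong in the pom's <properties> section. The concrete versions below are only placeholders I picked to match the _2.11 artifacts above, not values prescribed by the official docs:

<properties>
    <!-- Placeholder versions; pick ones that match your cluster and the connector -->
    <flink.version>1.6.2</flink.version>
    <scala.binary.version>2.11</scala.binary.version>
    <scala.version>2.11.12</scala.version>
</properties>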
Alright, just run it.
Success!!!