Flume:sink.type=hive

Flume relays data with Kafka as the source and Hive as the sink.

Background: the company wants checkpoint (vehicle bayonet) data from a city in Sichuan ingested into the big data platform in real time. Historical data can be loaded into Hive with LOAD, so the real problem is handling the incremental data. The field devices capture about 4 million checkpoint records, which is not much; after collection, the data-integration staff push the records into Kafka.

Approach: Flume reads the raw data from Kafka and can either write it directly into Hive, or write it to HDFS and load it through a Hive external table. Since the first option needs no custom code, only configuration, it was chosen.

Common problems and how to handle them:

1. Missing jars, especially hcatalog and antlr-runtime-3.4;

2. batchSize: the consumption rate must match the channel, otherwise it keeps reporting Failed errors;

3. The Hive table must be created with transactions enabled and a lowercase table name; errors of this kind are reported clearly and are easy to correct;

4. To check whether the Hive table actually has data, do not use "show create table"; run a SELECT directly, e.g. the queries below.
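For example, a quick check against the sink's target table from the configuration below:

select count(*) from base_kkdata_invalid;
select * from base_kkdata_invalid limit 10;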


The configuration is as follows:

PS:

1. Partitioning: the Timestamp in the Event header cannot be used directly, because with some delivery latency, records near a time boundary would be assigned to the wrong partition. Instead, use regex_extractor to parse the Body, extract the PassTime field, add it to the Header, and partition on that.

2. Filtering: some records have license plates that were not recognized correctly and must be filtered out with an interceptor; the alternative plate patterns in the regular expression are joined with |. A worked example follows.
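For illustration only, assume a JSON record body like the one below (the field names and sample plate are hypothetical, not taken from the original):

{"PlateNo":"川A12345","PassTime":"2017-11-08 09:30:00"}

i1 keeps this event because 川A12345 matches one of the plate patterns; i2 extracts 2017, 11 and 08 from the first yyyy-MM-dd date in the body, so the sink writes the event to partition (year=2017, month=11, day=08).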


server.sources = test_source
server.channels = test_channel
server.sinks = test_sink


# the source configuration of test_source
server.sources.test_source.type = org.apache.flume.source.kafka.KafkaSource
server.sources.test_source.kafka.topics = kakoudata
server.sources.test_source.kafka.consumer.group.id = groupj
server.sources.test_source.kafka.security.protocol = PLAINTEXT
server.sources.test_source.kafka.auto.offset.reset = smallest
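# keep batchSize <= the channel's transactionCapacity (1000 here), otherwise puts keep failing
# note: on stock Apache Flume 1.7+ the offset option is kafka.consumer.auto.offset.reset = earliest/latest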
server.sources.test_source.batchDurationMillis = 1000
server.sources.test_source.batchSize = 1000
server.sources.test_source.channels = test_channel
server.sources.test_source.interceptors = i1 i2

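# i1: regex_filter with excludeEvents = false keeps only events whose body contains a matching plate number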
server.sources.test_source.interceptors.i1.type = regex_filter
server.sources.test_source.interceptors.i1.regex = [\\u4e00-\\u9fa5]{1}[A-Z]{1}[A-Z0-9]{5}|[\\u4e00-\\u9fa5]{1}[A-Z]{1}[A-Z0-9]{4}[\\u4e00-\\u9fa5]{1}|WJ[\\u4e00-\\u9fa5]{1}[A-Z0-9]{5}
server.sources.test_source.interceptors.i1.excludeEvents = false


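# i2: regex_extractor pulls the first yyyy-MM-dd date (the PassTime field) out of the body and puts year/month/day into the event header for partitioning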
server.sources.test_source.interceptors.i2.type = regex_extractor
server.sources.test_source.interceptors.i2.regex = (\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)
server.sources.test_source.interceptors.i2.serializers = s1 s2 s3
server.sources.test_source.interceptors.i2.serializers.s1.name = year
server.sources.test_source.interceptors.i2.serializers.s2.name = month
server.sources.test_source.interceptors.i2.serializers.s3.name = day


# the channel configuration of test_channel
server.channels.test_channel.type = memory
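# memory channel: fast but volatile (events are lost if the agent restarts); capacity must be >= transactionCapacity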
server.channels.test_channel.capacity = 10000
server.channels.test_channel.transactionCapacity = 1000
server.channels.test_channel.channlefullcount = 10
server.channels.test_channel.keep-alive = 3
server.channels.test_channel.byteCapacityBufferPercentage = 20



# the sink configuration of test_sink
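# Hive streaming sink: the target table must be transactional, bucketed, and stored as ORC (see the DDL below)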
server.sinks.test_sink.type = hive
server.sinks.test_sink.hive.metastore = thrift://192.168.95.42:21088
server.sinks.test_sink.hive.database = default
server.sinks.test_sink.hive.table = base_kkdata_invalid
server.sinks.test_sink.hive.txnsPerBatchAsk = 2
server.sinks.test_sink.hive.partition = %{year},%{month},%{day}
server.sinks.test_sink.useLocalTimeStamp = false
server.sinks.test_sink.hive.batchSize = 10
server.sinks.test_sink.serializer = JSON
server.sinks.test_sink.channel = test_channel

--------------------------------------------------------------------------------

Hive table creation statement:

create table test_wuleiname(id string, name string)
partitioned by (year string, month string, day string)
clustered by (id) into 2 buckets stored as orc
location '/user/hive/warehouse/test_hhh'
TBLPROPERTIES ('transactional'='true');
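The three partition columns match the sink's hive.partition = %{year},%{month},%{day} spec. Note that Hive's ACID/streaming support must also be enabled on the metastore/HiveServer side; a minimal sketch of the hive-site.xml settings usually required (typical values, adjust for your cluster):

hive.support.concurrency = true
hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on = true
hive.compactor.worker.threads = 1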


Reposted from: https://www.cnblogs.com/1023linlin/p/7807701.html
