Option 1: collect the logs into HDFS.
Option 2: insert into an existing table, using Flume to deliver the data to Hive; the Hive table must store its data in ORC format.
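The Hive sink can only write to transactional Hive tables, which in turn must be bucketed and stored as ORC. A minimal DDL sketch for a matching target table (the database and table names mirror the config below; the `id`/`msg` columns and bucket count are assumptions to adapt to your data):

```sql
-- run in the hive CLI or beeline; the metastore must have the
-- transaction manager enabled (hive.txn.manager = DbTxnManager)
CREATE DATABASE IF NOT EXISTS hive;

CREATE TABLE hive.flume (
  id  INT,      -- placeholder columns: match these to your log fields
  msg STRING
)
CLUSTERED BY (id) INTO 2 BUCKETS   -- bucketing is required for ACID tables
STORED AS ORC                      -- the hive sink requires ORC storage
TBLPROPERTIES ('transactional' = 'true');
```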
source:  syslog events arriving over the network
channel: memory plus local disk — events are buffered in memory first, and spill to local disk as overflow once memory is full
sink:    hive
a1.sources = s1
a1.channels=c1
a1.sinks=k1
# receive syslog events over TCP
a1.sources.s1.type = syslogtcp
a1.sources.s1.port= 5140
a1.sources.s1.host= wangfutai
a1.sources.s1.channels = c1
# spillable memory channel: holds events in memory up to memoryCapacity,
# then spills to the disk directories below
a1.channels.c1.type = SPILLABLEMEMORY
a1.channels.c1.memoryCapacity = 10000
a1.channels.c1.overflowCapacity = 1000000
a1.channels.c1.byteCapacity = 800000
a1.channels.c1.checkpointDir =/home/wangfutai/a/flume/checkPoint
a1.channels.c1.dataDirs = /home/wangfutai/a/flume/data
# hive sink: streams events into the transactional ORC table via the metastore
a1.sinks.k1.type = hive
a1.sinks.k1.channel = c1
a1.sinks.k1.hive.metastore = thrift://wangfutai:9083
a1.sinks.k1.hive.database = hive
a1.sinks.k1.hive.table = flume
# the hive sink requires a serializer; the fieldnames must match the target
# table's columns (id,msg here are placeholders)
a1.sinks.k1.serializer = DELIMITED
a1.sinks.k1.serializer.delimiter = ","
a1.sinks.k1.serializer.fieldnames = id,msg
#a1.sinks.k1.hive.partition = as
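Once the agent is running, the pipeline can be exercised end to end by writing a syslog-formatted line to TCP port 5140; each line becomes one Flume event. A minimal sketch in Python (the host and port come from the source config above; the frame builder follows the RFC 3164 `<PRI>` convention, where priority = facility × 8 + severity):

```python
import socket


def syslog_frame(msg: str, facility: int = 1, severity: int = 6) -> bytes:
    """Build a minimal RFC 3164-style frame: "<PRI>message\n"."""
    pri = facility * 8 + severity
    return f"<{pri}>{msg}\n".encode("utf-8")


def send_syslog(msg: str, host: str = "wangfutai", port: int = 5140) -> None:
    # host/port match a1.sources.s1 in the flume config above
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(syslog_frame(msg))
```

For example, `send_syslog("1,hello")` would emit a comma-delimited event whose fields line up with the columns of the target table.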
flume -- collecting logs into Hive
This article describes how to use Apache Flume to collect log data from a network source and deliver it reliably to Hive for storage and analysis. By configuring a Flume agent, logs are captured in real time, buffered, and imported into a Hive table, supporting real-time monitoring and business insight over big data.