Flume: Connecting to HDFS and Hive

Connecting Flume to HDFS

Go to the Flume configuration page.


Configure flume.conf:


# Name the components on this agent

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# sources

a1.sources.r1.type = netcat

a1.sources.r1.bind = 0.0.0.0

a1.sources.r1.port = 41414

# sinks

a1.sinks.k1.type = hdfs

a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S

a1.sinks.k1.hdfs.filePrefix = events-

a1.sinks.k1.hdfs.round = true

a1.sinks.k1.hdfs.roundValue = 10

a1.sinks.k1.hdfs.roundUnit = minute

a1.sinks.k1.hdfs.useLocalTimeStamp = true

a1.sinks.k1.hdfs.batchSize = 10

a1.sinks.k1.hdfs.fileType = DataStream

# channels

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1
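
With round = true, roundValue = 10, and roundUnit = minute, the timestamp used to expand the escape sequences in hdfs.path is rounded down to the nearest ten minutes, so events are bucketed into ten-minute directories. For example (illustrative timestamp, not from the original test), an event arriving at 2017-07-01 10:37:42 would land under:

hdfs://slave1/flume/events/17-07-01/1030/00/events-...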

Test connectivity with telnet:

telnet slave1 41414

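Each line typed into the telnet session is delivered as one event, and the netcat source acknowledges it with OK. The session looks roughly like this (the typed text is an arbitrary example):

Trying slave1...
Connected to slave1.
Escape character is '^]'.
hello flume
OK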

Check the Flume logs to find the HDFS file path.


View the file contents; the test is successful.

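The file can also be located and inspected directly from the command line; the glob below simply matches the date/time directory layout defined by hdfs.path above:

hdfs dfs -ls -R /flume/events
hdfs dfs -cat '/flume/events/*/*/*/events-*'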

Connecting Flume on Windows to Hive


# Name the components on this agent

a1.sources=r1

a1.sinks=k1

a1.channels=c1

# source

a1.sources.r1.type=avro

a1.sources.r1.bind=0.0.0.0

a1.sources.r1.port=43434

# sink

a1.sinks.k1.type = hive

a1.sinks.k1.hive.metastore = thrift://192.168.18.33:9083

a1.sinks.k1.hive.database = bd14

a1.sinks.k1.hive.table = flume_log

a1.sinks.k1.useLocalTimeStamp = true

a1.sinks.k1.serializer = DELIMITED

a1.sinks.k1.serializer.delimiter = "\t"

a1.sinks.k1.serializer.serdeSeparator = '\t'

a1.sinks.k1.serializer.fieldnames = id,time,context

a1.sinks.k1.hive.txnsPerBatchAsk = 5

# channel

a1.channels.c1.type=memory

a1.channels.c1.capacity=1000

a1.channels.c1.transactionCapacity=100

# Bind the source and sink to the channel

a1.sources.r1.channels=c1

a1.sinks.k1.channel=c1
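
The Hive sink can only connect if the metastore thrift service is actually listening on the host and port configured above. A quick way to check on 192.168.18.33, and to start the service by hand on a non-CDH install, would be:

netstat -tlnp | grep 9083
hive --service metastore &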

Configure Flume on Windows:

# Name the components on this agent

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# source

a1.sources.r1.type = spooldir

a1.sources.r1.spoolDir = F:\\test

a1.sources.r1.fileHeader = true

# sink

a1.sinks.k1.type = avro

a1.sinks.k1.hostname = 192.168.18.34

a1.sinks.k1.port = 43434

# channel

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel

a1.sources.r1.channels = c1

a1.sinks.k1.channel = c1
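
Two caveats about the spooling directory source: files must be completely written before they are dropped into F:\test (the source fails on files that are still being modified), and a file name must never be reused. The renaming behaviour seen later is controlled by a property whose default is shown here for reference:

a1.sources.r1.fileSuffix = .COMPLETED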

Create the log table in Hive.


The Flume documentation requires that the Hive table be bucketed and stored in ORC format; in testing, if the table is not declared as ORC, Hive receives no data.

create table flume_log(
  id int,
  time string,
  context string
)
clustered by (id) into 3 buckets
stored as orc;
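
Hive streaming, which the Flume Hive sink relies on, also requires ACID support to be enabled on the Hive side (hive.txn.manager set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager and the compactor enabled in hive-site.xml); if those are missing, no data will arrive either. Whether the table matches the sink's expectations (bucketed, ORC) can be confirmed with:

hive -e 'describe formatted bd14.flume_log;'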

Create a log file in the monitored directory F:\test.


Start Flume from the bin directory of the Windows installation:

flume-ng.cmd agent -conf-file ../conf/windows.conf -name a1 -property flume.root.logger=INFO,console

Find a log file on Windows and drag it into F:\test.

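The DELIMITED serializer configured earlier expects every line to carry the id, time, and context fields separated by tabs, so records of roughly this shape (hypothetical values) will load cleanly:

1	2017-07-20 10:00:00	user login
2	2017-07-20 10:00:05	user logout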

After Flume finishes reading a file, the suffix .COMPLETED is appended to its name.


Query the Hive table.


The test is successful. The original plan was to query the Hive table through Impala, but Impala does not support ORC-format Hive tables, while the Flume Hive sink requires ORC, so Impala had to be dropped for now. A follow-up will be added if a workaround turns up.

Problems Encountered

Errors writing to the HDFS sink path

Cause: in CDH's Flume, the hdfs.path setting only needs the host address; the port does not need to be configured.

Garbled content in the HDFS file


Solution: add the following to the Flume configuration:

a1.sinks.k1.hdfs.fileType = DataStream

Cause: hdfs.fileType defaults to SequenceFile, which writes events into a binary (and possibly compressed) container rather than plain text, so the raw file appears garbled.

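For reference, the HDFS sink supports three file types (shown below as comments; per the Flume docs, CompressedStream also requires a codec to be set):

# a1.sinks.k1.hdfs.fileType = SequenceFile      (default; binary container)
# a1.sinks.k1.hdfs.fileType = DataStream        (plain event body; used here)
# a1.sinks.k1.hdfs.fileType = CompressedStream  (also set a1.sinks.k1.hdfs.codeC, e.g. gzip)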

AvroRuntimeException: Excessively large list allocation request detected: 825373449 items!


Solution: increase the Java heap size of the Flume agent.

Cause: the Flume agent ran out of memory.
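
In CDH the agent heap is changed through Cloudera Manager (the Flume service's Java heap configuration); on a plain Apache Flume install the equivalent is setting JAVA_OPTS in conf/flume-env.sh. The sizes below are illustrative assumptions, not recommendations:

export JAVA_OPTS="-Xms512m -Xmx1024m"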

NoClassDefFoundError: org/apache/hive/hcatalog/streaming/RecordWriter


Solution:

Locate the directory containing the Hive jars.


Locate the directory containing the Flume jars, then copy the Hive jars across:


cp /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/jars/hive-* /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/

Cause: Flume is missing Hive's jars, which need to be copied over from the CDH parcel directory.

EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null


Cause: the timestamp parameter was not set correctly.

Solution: configure the sink in the Flume conf file:

a1.sinks.k1.useLocalTimeStamp = true
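
An alternative fix, if events should carry their arrival time instead, is a timestamp interceptor on the source, which stamps every event's headers before it reaches the sink; a minimal sketch:

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp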
